Would a savant who has read all the books in a specific library, whether a college library or even the Library of Congress, be able to create original works without facing criticism for using knowledge gained from the works of others? That is the question I pose to you, the readers of this blog. How does my hypothetical savant differ from a typical large language model trained on web content?
Copyright law was originally intended to incentivize authors to produce new works by granting them exclusive rights to their writing for a limited time. It aimed to advance the progress of science and learning while ensuring that works eventually become accessible to the public.
The Copyright Act of 1790 gave American authors the exclusive right to print, reprint, or publish their works for 14 years, with the option to renew the protection for another 14 years. This law encouraged authors, artists, and scientists to produce original creations.
Revolutionizing copyright law to protect and empower modern content creators without hindering the capabilities of advanced technologies like large language models (LLMs) would involve a multifaceted approach:
Balancing Interests
Revised laws could strike an equilibrium between protecting original work and fostering innovation. This could be achieved by clearly defining what constitutes fair use, particularly in AI-generated content, and establishing transparent guidelines for attribution and compensation when such technologies use existing copyrighted materials as part of their learning process.
New Licensing Models
Implement licenses that cater specifically to LLMs, allowing them access to copyrighted works under certain conditions without infringing on the rights of original creators. This could involve pay-per-use models or subscriptions for AI developers who use these technologies.
Innovation Incentives
Offer additional protections and benefits for content creators to encourage them to invest time in creating new, innovative works that LLMs can use without legal repercussions—akin to a “digital commons.” These could include tax breaks or grants.
Adaptive Legislation
Laws should be designed to adapt as technology evolves, potentially incorporating AI-driven governance systems that can continuously interpret and apply copyright law based on current technological capabilities and social norms.
Some of the ideas in this post came from using Ollama and the Phi-3:medium model.
Years ago, I watched a TED talk by Larry Lessig about laws that stifle creativity. He made several excellent points in his speech, and it got me thinking about whether we are reaching a critical point in terms of laws regulating the use of generative AI. Recently, I listened to a podcast where the host claimed that there is no truly open-source AI and that, eventually, an incestuous situation could develop due to web scraping to train large language models (LLMs). This could lead to the creation of content by these LLMs and the recreation of content from the content created by the large language models, potentially resulting in a twenty-first-century Tower of Babel.
Do we need to build on the ideas presented in Larry’s influential talk to adapt to the current reality? Will large language models and other forms of artificial intelligence lower the quality of our culture and intelligence, or will they enhance culture and creativity as we’ve seen in the seventeen years since his talk?
Now that I’m working with Ollama, I needed to figure out how to locate the models on my storage medium and determine the amount of space they were occupying. This is especially important for the MacBook Air, which has a 256 GB drive that is already half full. My Linux computer, on the other hand, has a terabyte NVME drive, so I’m not as worried about storage there. However, I still like to track where everything is and how much storage it uses. I wanted to compile a list of the downloaded models and learn how to delete them.
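Before turning to Ollama's own tooling, you can check the models directory directly. As a point of reference (and an assumption worth verifying on your own system): a per-user install of Ollama on Linux or macOS keeps downloaded models under ~/.ollama/models, while a system-wide Linux install may use /usr/share/ollama/.ollama/models instead. A quick disk-usage check looks like this:

```shell
# Show total space used by downloaded models (per-user default location;
# this path is an assumption for a standard install -- verify on your system)
du -sh ~/.ollama/models
```

On a drive that is already half full, this is a handy number to watch.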
After Ollama is installed, you can get a list of the available commands by entering the following command in the terminal:
$ ollama help
Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  ps          List running models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information
Check the version of ollama you are using with:
$ ollama -v
ollama version is 0.3.9
Downloading models is easy using ‘ollama pull’:
$ ollama pull gemma2
This downloads the model to your local computer. You can also pull a model implicitly by running it:
$ ollama run phi3
This command pulls the ‘phi3’ model to your computer if it is not already present and then starts it. After I had tried a few models, I wanted to get a sense of how much space they were taking up on my system, so I used the ‘list’ (or ‘ls’) subcommand of the ‘ollama’ command.
$ ollama list
NAME ID SIZE MODIFIED
llama3.1:latest f66fc8dc39ea 4.7 GB 2 hours ago
codegemma:latest 0c96700aaada 5.0 GB 43 hours ago
phi3:latest 4f2222927938 2.2 GB 47 hours ago
gemma2:latest ff02c3702f32 5.4 GB 3 days ago
phi3:medium cf611a26b048 7.9 GB 3 days ago
gemma2:2b 8ccf136fdd52 1.6 GB 3 days ago
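Since the SIZE column is plain text, a small awk one-liner can total it. This is a rough sketch that assumes every size in the listing is reported in GB, as all of mine are above:

```shell
# Sum the SIZE column of `ollama list` (assumes every size is shown in GB)
ollama list | awk 'NR > 1 { total += $3 } END { printf "Total: %.1f GB\n", total }'
```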
Now I can see how much storage the models are using. It’s easy to delete a model. I decided to delete the oldest one first.
$ ollama rm gemma2
deleted 'gemma2'
Just to be sure, I ran ‘ollama ls’; sure enough, the model was gone.
NAME ID SIZE MODIFIED
llama3.1:latest f66fc8dc39ea 4.7 GB 2 hours ago
codegemma:latest 0c96700aaada 5.0 GB 44 hours ago
phi3:latest 4f2222927938 2.2 GB 47 hours ago
phi3:medium cf611a26b048 7.9 GB 3 days ago
You can check how many models are running on your machine with the following command:
$ ollama ps
NAME ID SIZE PROCESSOR UNTIL
phi3:latest 4f2222927938 6.0 GB 100% CPU 4 minutes from now
The ‘show’ command displays information about a model.
$ ollama show phi3
Model
arch phi3
parameters 3.8B
quantization Q4_0
context length 131072
embedding length 3072
Parameters
stop "<|end|>"
stop "<|user|>"
stop "<|assistant|>"
License
Microsoft.
Copyright (c) Microsoft Corporation.
You can interrupt a model's response by pressing Ctrl+C and exit the session by pressing Ctrl+D. In conclusion, Ollama has changed how I understand and interact with AI.
In the realm of spirituality and divine presence, one often encounters discussions revolving around gender-specific depictions of deities or higher powers within various religious traditions and cultural mythologies. While many have historically leaned towards masculine representations—often associated with strength, authority, and protection—the conceptualization of a feminine higher power brings to light the rich tapestry of female energy that permeates existence.
A woman as an embodiment or metaphor for divine love is not merely confined within ancient scriptures; contemporary spirituality also recognizes her influence in shaping our connections with ourselves, others, and ultimately all creation through this feminine higher power conceptualization. This essay delves into the characteristics of such a figure—one who might be envisioned as an ethereal being interwoven within nature’s own rhythm while guiding humanity towards harmonious coexistence with her nurturing, compassionate presence and wisdom that echoes through ages.
Firstly, the feminine higher power embodies a sense of balance—the Yin to masculine Yang in many spiritual frameworks—by offering healing energy during times when society has been imbalanced by war or conflict, poverty or social inequality, environmental degradation or collective despair. She is not merely an observer but rather actively engages with the Earth’s plight and seeks ways to mend its wounds through advocacy for ecological preservation, sustainability practices, and reverence towards natural resources that all too often are exploited without restraint or gratitude
Moreover, a feminine higher power exudes warmth and compassion—a beacon of light amidst the darkness within ourselves as well as in others who may suffer from isolation, trauma, mental illnesses, substance abuse issues among other things; all these struggles are not only acknowledged but addressed with genuine empathy. The nurturing essence is further expressed through fostering interconnected relationships amongst humans and the environment alike where every living being feels heard, seen, loved—a testament to her boundless love for creation in its entirety without conditions or boundaries attached thereto by societal constructs of gender roles; this inclusive approach invites everyone into experiencing wholeness under one roof.
Additionally, the feminine higher power embodies intuition and wisdom beyond rational thought processes—an innate sense that transcends conventional knowledge acquired through academics or professional careers alone which offers deeper insights towards understanding complex issues facing humanity such as climate change crises affecting indigenous populations worldwide who suffer from its consequences most severely. She holds an unconditional love for all living beings, including non-human entities like animals and plants whose existence is often disregarded by humans in their quest to dominate nature rather than coexist with it harmoniously; this divine feminine presence advocates respecting the interconnectedness of life where every being plays a vital role within an intricate web that forms our beautiful planet Earth.
Furthermore, she symbolizes creativity and artistic expression—a powerful tool used to heal wounds caused by societal oppressions rooted in gender-based discrimination; the embodiment of this force manifests itself through various mediums such as dance forms like Sufi whirling or Shakti Tandava Nritya which transcend language barriers and bridge cultural divides while also offering therapeutic benefits for mental health. Her creativity lies not only within art but extends to innovative solutions towards solving global problems that require collaborations between diverse communities—where ideas flow freely across cultures, religions or social backgrounds without judgment-driven barriers inhibiting mutual understanding and respect amongst humanity at large; this is an embodiment of her visionary leadership style where she guides societies towards progressive change through dialogue rather than imposing authority.

In conclusion, a feminine higher power represents not only love but also patience—the willingness to nurture growth within ourselves and others despite external circumstances that may present obstacles along the way; this divine presence serves as an anchor for those seeking solace amid chaos or guidance through life’s complexities with graceful resilience.
Embracing her feminine essence can inspire individuals across genders to embody compassion, creativity and intuition while promoting balance within personal relationships—as well as fostering unity amongst global communities that transcend traditional barriers imposed by societal norms which have historically led towards division rather than collective growth; this invites us all into embracing a more inclusive worldview where every voice is heard, valued and respected regardless of gender or cultural background.
In essence, the conceptualization of a feminine higher power serves as an empowering tool for spiritual seekers yearning to embrace love’s transformative energy within themselves while extending it towards others—an everlasting reminder that true strength lies not in dominance but in interdependence, rooted in deeply embedded values such as compassion, creativity and intuition which nurture the Earth itself back into equilibrium; this divine feminine presence calls upon humanity to evolve beyond our flawed perceptions of gender roles while uniting us towards global harmony that honors every living being within creation’s vast tapestry.
From a writing prompt, “Write an essay about what a feminine higher power might be like.” (Ollama with the Phi3 model)
Assault weapon ownership, often misinterpreted as an expression of patriotic zeal or individual right within the United States, may in fact mirror symptoms and patterns indicative of a broader societal condition –that of psychological distress stemming from unchecked aggression. At face value, these formidable weapons are legal under current regulations; however, they represent more than mere tools for self-defense or hunting but symbolize an eroding demarcation between firearm and mentality as conduits to violence when misused by individuals with antisocial traits potentially indicative of underlying mental health conditions.
The prevalence of assault weapons among mass shooters hints at a disturbing correlation, not necessarily causal, but significant enough that it invites further scrutiny into the psychological makeup and intentions behind such acts – an exploration that could be beneficial in averting future tragedies. As we grapple with mental health crises affecting various demographics within society, including a disproportionately higher prevalence of diagnosed conditions among gun owners compared to non-owners, the link between weapon ownership and aggression becomes even more pertinent for discussion amongst psychologists.
Furthermore, it is worth considering whether allowing accessibility to such potent armaments inadvertently encourages a culture that normalizes or romanticizes violence as an acceptable form of expression – traits that may align with impulsive and aggressive behavior often associated with psychological profiles. This could potentially foster resentment among those who feel marginalized, further exacerbating societal rifts already present in the American cultural fabric.
The potential for these devices to be wielded as tools of terror or intimidation cannot be overlooked; their very existence within civilian hands poses an unprecedented risk that society must address holistically – incorporating insights from mental health professionals and sociologists alike, with a shared vision towards fostering empathy, understanding, healing wounds of disparities that feed resentment in the heartland. In essence, assault weapon ownership may be symptomatic, not just of individual distress but reflective of broader social pathologies calling for urgent and comprehensive societal introspection – recognizing mental health as integral to our national psyche’s well-being demands more than legislation; it calls upon a collective moral fortitude.
This essay was generated by Ollama using the Phi3 model from a writing prompt.
Everywhere you look, someone is talking or writing about artificial intelligence. I have been keenly interested in the topic since my graduate school days in the 1990s. I have used ChatGPT, Microsoft Copilot, Claude, Stable Diffusion, and other AI software to experiment with how this technology works and satisfy my innate curiosity. Recently, I discovered Ollama, an open-source, MIT-licensed tool that runs large language models, including Meta's Llama family, locally on Linux, macOS, and Microsoft Windows. There is a great deal of concern that while using LLMs in the cloud, your data is being scraped and reused by one of the major technology companies. Since Ollama runs locally, there is no danger that your work could end up in someone else's LLM.
The Ollama website proclaims, “Get up and running with Large Language Models.” That invitation was all I needed to get started. Open a terminal on Linux and enter the following to install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
The project lists all the models that you can use, and I chose the first one in the list, Llama3.1. Installation is easy, and it did not take long to install the Llama3.1 model. I followed the instructions and, in the terminal, entered the following command:
$ ollama run llama3.1
The model began to install, which took a couple of minutes. This could vary depending on your CPU and internet connection. I have an Intel i7 with 64 GB RAM and a robust internet connection. Once the model was downloaded, I was prompted to ‘talk’ with the LLM. I decided to ask a question about the history of my alma mater, St. Bonaventure University. I entered the following commands:
$ ollama run llama3.1
>>>What is the history of St. Bonaventure University?
The results were good but somewhat inaccurate. “St. Bonaventure University is a private Franciscan university located in Olean, New York. The institution was founded by the Diocese of Buffalo and has a rich history dating back to 1856.” St. Bonaventure is located near Olean, New York, and it is in the Diocese of Buffalo, but it was founded in 1858. I then asked the model to name some famous St. Bonaventure alumni, and the inaccuracies were comical: Bob Lanier is a famous alumnus, but Danny Ainge is not.
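Beyond the interactive prompt, a running Ollama instance also exposes a local REST API, by default at http://localhost:11434, so the same question can be scripted. The sketch below is a minimal example using only the Python standard library; the /api/generate endpoint and JSON fields follow Ollama's documented API, and it assumes the llama3.1 model is already pulled and the server is running:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_payload(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()


def ask(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return its reply."""
    request = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]


# With the Ollama server running, this would print the model's answer:
# print(ask("llama3.1", "What is the history of St. Bonaventure University?"))
```

Nothing leaves the machine; the request goes only to the local server, which is the whole appeal of running models this way.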
The results are rendered in Markdown, which is a real plus. I also knew that a GPU would render the results much more quickly, so I wanted to install Ollama on my M2 MacBook Air, which I soon did. I followed the much simpler directions: download Ollama-darwin.zip, unzip the archive, and double-click the Ollama icon. The program is installed in the MacBook's Applications folder. When the program is launched, it directs me to the Mac Terminal app, where I can enter the same commands I used on my Linux computer.
Unsurprisingly, Ollama uses a great deal of processing power, which is lessened if you run it on a computer with a GPU. My Intel NUC 11 is a powerful desktop computer with a quad-core 11th Gen Intel Core i7-1165G7, 64 gigabytes of RAM, and a robust internet connection for downloading additional models. I posed similar questions to the Llama3.1 model, first on the Intel NUC running Linux and then on the M2 MacBook Air running macOS. You can see the CPU utilization on my Linux desktop below: it's pegged, and the model's output is slow, at roughly 50 words per minute. Contrast that with the M2 MacBook, whose GPU keeps CPU utilization around 6.9% and produces words faster than I could read them.
While Ollama's Llama3.1 might not excel at history recall, it does very well when asked to create Python code. I prompted it to write Python code that draws a circle, without specifying how to accomplish the task. It rendered the code shown below. I first had to install the ‘pygame’ module, which was not on my system.
$ sudo apt install python3-pygame
# Python Game Development
import pygame
from pygame.locals import *

# Initialize the pygame modules
pygame.init()

# Create a 640x480 size screen surface
screen = pygame.display.set_mode((640, 480))

# Define some colors for easy reference
WHITE = (255, 255, 255)
RED = (255, 0, 0)

while True:
    # Handle events
    for event in pygame.event.get():
        if event.type == QUIT or (event.type == KEYDOWN and event.key == K_ESCAPE):
            pygame.quit()
            quit()

    screen.fill(WHITE)  # Fill the background with white color

    # Drawing a circle on the screen at position (250, 200), radius 100
    pygame.draw.circle(screen, RED, (250, 200), 100)

    # Update the full display Surface to the screen
    pygame.display.flip()
I copied the code into VSCodium and ran it. You can see the results below.
As I continue experimenting with Ollama and other open-source LLMs, I’m struck by the significance of this shift toward local, user-controlled AI. No longer are we forced to rely on cloud-based services that may collect our data without our knowledge or consent. With Ollama and similar projects, individuals can harness the power of language models while maintaining complete ownership over their work and personal information. This newfound autonomy is a crucial step forward for AI development and I’m eager to see where it takes us.
A nation weeps, a mother cries,
Tiny hands clutch at crimson skies.
“Just keep them safe,” they whisper low,
But hollow words, the truth don’t show.
In halls of power, shadows dance,
While children tremble, afraid to glance.
Gutless men, in ivory towers,
Bow to pressure, silencing powers.
They cater to a lobby’s plea,
For profits made and safety fee.
Blind eye they turn on innocent lives,
Clandestine deals beneath the tides.
The echo of their promises rings hollow,
A symphony of fear in every poll.
While school halls become battlegrounds rife,
And innocence fades into the knife.
How can they stand and claim “common sense,”
When lives are lost in this gun-fueled trance?
Their cowardice a bitter pill to swallow,
As innocence bleeds on the playground’s hollow.
They preach of freedom, but at what cost?
While children pay for their moral frost.
Rise from slumber, break the chains of fear,
Stand for justice, let the truth be clear!
Poem from the Gemma2 large language model, from the writing prompt: “Write a poem about the plight of gutless politicians craven to the gun lobby while children's lives are endangered in America's classrooms.”
Just yesterday, I came across Ollama, an open-source project for running large language models locally. I had been helping a friend learn more about locally hosted large language models, and my internet search led me to Ollama. I installed it quickly on my Linux Mint desktop using the command supplied on the project website.
curl -fsSL https://ollama.com/install.sh | sh
It didn’t take long to install the software, and then I wanted to explore how it worked. I decided to download and install the recommended Llama3.1 model. I followed the instructions and entered the following command in the terminal:
$ ollama run llama3.1
The model began to install, which took a couple of minutes. This could vary depending on your CPU and internet connection. I have an Intel i7 with 64 GB RAM and a robust internet connection. Once the model was downloaded, I was presented with a prompt to ‘talk’ with the LLM.
>>> What is the history of St. Bonaventure University?
“St. Bonaventure University, a private Franciscan university located in Allegany County, New York, has a rich and storied history that spans over 160 years.”
It’s not bad, but it’s not entirely accurate either. The results were rendered in Markdown, which is a real plus. I also knew that having a GPU would render the results much more quickly, so I wanted to install Ollama on my M2 MacBook Air.
I followed the much easier directions: Download the Ollama-darwin.zip, unzip the archive, and double-click on the Ollama icon. The program is installed in the MacBook’s Application folder. When the program is launched, it directs me to the Mac Terminal app, where I can enter the same commands I had entered on my Linux computer.
I ran the same commands on the Mac to install Llama3.1. I asked for the history of St. Bonaventure University in Olean, New York. The results appeared much more quickly, but they were even less accurate, comically so.
“The institution was founded in 1858 by five Jesuit priests from the Buffalo Diocese as St. Bonaventure College. The college was established on a 200-acre farm in Deerpark, New York, with the goal of providing a Catholic education to young men.”
I downloaded and installed the Gemma 2 model from Google to see if it was faster and more accurate. The commands are the same whether using the Linux or Mac terminal.
$ ollama run gemma2:2b
Gemma 2:2b is the smallest model, at 1.6 GB, and it downloaded and installed quickly. The results were similarly inaccurate, but they rendered much more quickly on both the Mac and the Linux computer. Llama3.1 was only mildly accurate, and only for information from 2021 or earlier. When I asked Gemma2 who was running for President of the United States, its information was accurate only as of October 26, 2023.
Despite the results, I am encouraged that it is possible to run large language models on PCs.
I’m always shopping for laptops to outfit with Linux, and one of my favorite stops is DellRefurbished.com. Over the weekend, while surfing that site, I came upon a good deal on a Dell Latitude 5410 laptop. The new-to-me machine came with an i7 processor. Using the ‘inxi’ command, it was easy to see that this laptop was a bargain at forty-five percent off the list price of $399.
The battery is in great shape for a three-year-old laptop that weighs a little bit over three pounds. I’ve been using Dell Computers with Linux loaded on them for over twenty years. However, lately, these Latitude 5410s have come with Intel Rapid Storage Technology for use with the Windows 10 Professional operating system that came pre-installed on the laptop. Linux Mint would not install when starting from a live USB drive.
The solution that I quickly found when upgrading one of these laptops a couple of weeks ago is as follows:
When starting up the Latitude, press the F2 function key to enter the BIOS, look for the SATA controls, and change the setting from RST to AHCI. Save the BIOS settings, restart the computer, and complete the installation of Linux Mint.
For more information on this particular issue, you should consult the excellent documentation from Ubuntu. https://help.ubuntu.com/rst/
I volunteer with Meals on Wheels once a week, and one of the clients on my route is a man my age. We frequently spend time visiting when I bring his meal to him. Last week, when I stopped, he asked me if I could help him put his laptop computer back in working order. I told him I’d do him one better—installing Linux Mint on his ailing computer and extending its life.
After checking with him about the model of his laptop, I discovered that it was a lightly powered notebook with minimal disk storage. I had a five-year-old System76 Darter Pro just sitting around gathering dust, so I decided to install Linux Mint Cinnamon 22, which is my daily driver. Though the Darter Pro is five and a half years old, it still has lots of life in it. With an Intel i7 CPU, 16 gigabytes of RAM, and a 250 gigabyte NVME drive, it would make a fantastic machine for my client.
When I shared my decision with him, he was delighted; though he trusted my judgment, he knew nothing about Linux Mint and wondered whether he could master the learning curve. I told him that Mint's Cinnamon desktop has many similarities to the Windows 10 desktop he was used to. I searched YouTube and other sites for a good introduction to Linux Mint but could not find the kind of documentation I wanted to give my friend, so I decided to write my own, complete with screenshots of the critical first steps for getting started with a Linux Mint laptop.
This is the first screen you see after you start up the laptop. His name is Tom, and I shared his password in the documentation. I knelt next to him as I helped him through these first steps.
The next thing he sees is the Welcome to Linux Mint splash screen. I provide a brief explanation and then encourage him to return here later to refer to the documentation and help information.
I direct his attention to the screen’s lower right-hand corner and guide him through the connection to his wireless access point and network connection.
I explained to him that the fourth item from the left is the ‘network connection’ icon, and he needs to click on it to display the wireless networks that are available to connect with. We quickly found his local connection.
He was unsure of his wireless password, but after some reflection, and from helping others who use the same internet service provider, I remembered that the ISP uses the client's mobile phone number as the default password. That worked, and we were connected. I walked him through installing the necessary updates on his new Linux laptop, which also let me demonstrate the security of Linux systems, since each update required his password. Linux Mint comes with Firefox by default, but I also showed him how to add the Google Chrome browser and some other applications using the Software Manager.
Eventually, I showed him how to use most of the essential tools of this new Linux Mint computer, including ‘Nemo’, the Linux Mint file manager.
After almost forty-five minutes of initial setup and instruction, he was ready to dive in and experience Linux Mint firsthand. This morning, I received an email from Tom, and it warmed my heart. He said, “I was on my new computer till 11:00 last night. I’m like a kid with a new toy. I want to thank you for the info you sent on linux. I perused the emails and quickly learned I would never be a Linux expert…Again, a thousand thanks for the laptop. I was in a bind, and you really got my feet from the fire. Sincerely, tom.”
His email made my day, and it is why I love to share the Linux and open-source journey with others: we can extend the life of good hardware and demonstrate the effectiveness of the open-source approach.