Exploring the Relevance of Copyright: Can a Savant Create Original Works Without Criticism?

Would a savant who has read all the books in a specific library, whether a college library or even the Library of Congress, be able to create original works without facing criticism for using knowledge gained from the works of others? That’s a question I pose to you, the readers of this blog. How does my hypothetical savant differ from a typical large language model trained on web content?

Copyright law, a crucial aspect of our modern content creation landscape, was initially intended to incentivize authors to produce new works by granting them exclusive rights to their writing for a limited time. This was aimed at advancing the progress of science and learning by ensuring that works are accessible to the public.

The Copyright Act of 1790 gave American authors the exclusive right to print, reprint, or publish their works for 14 years, with the option to renew that protection for another 14 years. This law encouraged authors, artists, and scientists to produce original creations.

Revolutionizing copyright laws to both protect and empower modern content creators, while not hindering the capabilities of advanced technologies like Large Language Models (LLMs), would involve a multifaceted approach:

Balancing Interests

Revised laws could strike an equilibrium between protecting original work and fostering innovation. This could be achieved by clearly defining what constitutes fair use, particularly in AI-generated content, and establishing transparent guidelines for attribution and compensation when such technologies use existing copyrighted materials as part of their learning process.

New Licensing Models

Implement licenses that cater specifically to LLMs, allowing them access to copyrighted works under certain conditions without infringing on the rights of original creators. This could involve pay-per-use models or subscriptions for AI developers who use these technologies.

Innovation Incentives

Offer additional protections and benefits for content creators to encourage them to invest time in creating new, innovative works that LLMs can use without legal repercussions—akin to a “digital commons.” These could include tax breaks or grants.

Adaptive Legislation

Laws should be designed to adapt as technology evolves, potentially incorporating AI-driven governance systems that can continuously interpret and apply copyright law based on current technological capabilities and social norms.

Some of the ideas in this post came from using Ollama and the Phi-3:medium model.

Contemporary AI Ethics – Can We Safeguard Innovation Without Falling into Repetitive Cycles?

Years ago, I watched a TED talk by Larry Lessig about laws that stifle creativity. He made several excellent points in his speech, and it got me thinking about whether we are reaching a critical point in terms of laws regulating the use of generative AI. Recently, I listened to a podcast where the host claimed that there is no truly open-source AI and that, eventually, an incestuous situation could develop: web scraping to train large language models (LLMs) would sweep up content that LLMs themselves had generated, so new models would be trained on the output of older ones, potentially resulting in a twenty-first-century Tower of Babel.

Do we need to build on the ideas presented in Larry’s influential talk to adapt to the current reality? Will large language models and other forms of artificial intelligence lower the quality of our culture and intelligence, or will they enhance culture and creativity as we’ve seen in the seventeen years since his talk?

Assault Weapons: Reflections of Societal Illness and the Urgent Need for Comprehensive Reform in America’s Gun Culture

Assault weapon ownership, often misinterpreted as an expression of patriotic zeal or individual right within the United States, may in fact mirror symptoms and patterns indicative of a broader societal condition: psychological distress stemming from unchecked aggression. At face value, these formidable weapons are legal under current regulations; however, they are more than mere tools for self-defense or hunting. They symbolize an eroding demarcation between firearm and mentality as conduits to violence when misused by individuals with antisocial traits potentially indicative of underlying mental health conditions.

The prevalence of assault weapons among mass shooters hints at a disturbing correlation, not necessarily causal, but significant enough that it invites further scrutiny into the psychological makeup and intentions behind such acts – an exploration that could be beneficial in averting future tragedies. As we grapple with mental health crises affecting various demographics within society, including a disproportionately higher prevalence of diagnosed conditions among gun owners compared to non-owners, the link between weapon ownership and aggression becomes even more pertinent for discussion amongst psychologists.

Furthermore, it is worth considering whether easy access to such potent armaments inadvertently encourages a culture that normalizes or romanticizes violence as an acceptable form of expression, traits that may align with the impulsive and aggressive behavior often associated with certain psychological profiles. This could potentially foster resentment among those who feel marginalized, further exacerbating societal rifts already present in the American cultural fabric.

The potential for these devices to be wielded as tools of terror or intimidation cannot be overlooked; their very existence within civilian hands poses an unprecedented risk that society must address holistically – incorporating insights from mental health professionals and sociologists alike, with a shared vision towards fostering empathy, understanding, healing wounds of disparities that feed resentment in the heartland. In essence, assault weapon ownership may be symptomatic, not just of individual distress but reflective of broader social pathologies calling for urgent and comprehensive societal introspection – recognizing mental health as integral to our national psyche’s well-being demands more than legislation; it calls upon a collective moral fortitude.

This essay was generated from a prompt using Ollama and the Phi-3 model.

Breaking Free from the Cloud: Exploring the Benefits of Local, Open-Source AI with Ollama

Everywhere you look, someone is talking or writing about artificial intelligence. I have been keenly interested in the topic since my graduate school days in the 1990s. I have used ChatGPT, Microsoft Copilot, Claude, Stable Diffusion, and other AI software to experiment with how this technology works and satisfy my innate curiosity. Recently, I discovered Ollama, an open-source tool for running large language models, including Meta’s Llama family, locally on Linux, macOS, and Microsoft Windows. There is a great deal of concern that while using LLMs in the cloud, your data is being scraped and reused by one of the major technology companies. Ollama is open source and has an MIT license. Since it runs locally, there is no danger that your work could end up in someone else’s LLM.

The Ollama website proclaims, “Get up and running with Large Language Models.” That invitation was all I needed to get started. Open a terminal on Linux and enter the following to install Ollama:

curl -fsSL https://ollama.com/install.sh | sh
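
Before pulling any models, you can confirm the installation succeeded; these standard Ollama commands print the installed version and list any models already on your system:

$ ollama --version
$ ollama list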

The project lists all the models that you can use, and I chose the first one in the list, Llama3.1. Installation is easy, and it did not take long to install the Llama3.1 model. I followed the instructions and, in the terminal, entered the following command:

$ ollama run llama3.1

The model began to install, which took a couple of minutes. This could vary depending on your CPU and internet connection. I have an Intel i7 with 64 GB RAM and a robust internet connection. Once the model was downloaded, I was prompted to ‘talk’ with the LLM. I decided to ask a question about the history of my alma mater, St. Bonaventure University. I entered the following commands:

$ ollama run llama3.1
>>>What is the history of St. Bonaventure University?

The results were good but somewhat inaccurate. “St. Bonaventure University is a private Franciscan university located in Olean, New York. The institution was founded by the Diocese of Buffalo and has a rich history dating back to 1856.” St. Bonaventure is located near Olean, New York, and it is in the Diocese of Buffalo, but it was founded in 1858. I asked the model to name some famous St. Bonaventure alumni, and the inaccuracies there were comical. Bob Lanier was a famous alumnus, but Danny Ainge was not.

The results are rendered in Markdown, which is a real plus. I also knew that having a GPU would render the results much more quickly, so I wanted to install Ollama on my M2 MacBook Air, which I soon did. I followed the much easier directions: download Ollama-darwin.zip, unzip the archive, and double-click the Ollama icon. The program installs into the MacBook’s Applications folder. When launched, it directs me to the Mac Terminal app, where I can enter the same commands I had entered on my Linux computer.

Unsurprisingly, Ollama uses a great deal of processing power, which is lessened if you run it on a computer with a GPU. My Intel NUC 11 is a very powerful desktop computer with a quad-core 11th Gen Intel Core i7-1165G7, 64 gigabytes of RAM, and a robust internet connection for downloading additional models. I posed similar questions to the Llama3.1 model, first on the Intel running Linux and then on the M2 MacBook Air running macOS. You can see the CPU utilization below on my Linux desktop. It’s pegged, and the model’s output is slow, at roughly 50 words per minute. Contrast that with the M2 MacBook Air, whose GPU keeps CPU utilization at approximately 6.9% while producing words faster than I could read.

Screen picture by Don Watkins CC by SA 4.0
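
If you want numbers rather than impressions, Ollama also exposes a local REST API (by default at http://localhost:11434) that you can script against. Here is a minimal Python sketch that times a single prompt; it assumes the default endpoint and that the llama3.1 model has already been pulled, and it uses only the standard library:

import json
import time
import urllib.request

# Ollama's default local API endpoint
URL = "http://localhost:11434/api/generate"

payload = json.dumps({
    "model": "llama3.1",
    "prompt": "What is the history of St. Bonaventure University?",
    "stream": False,  # wait for the complete response instead of streaming
}).encode("utf-8")

request = urllib.request.Request(
    URL, data=payload, headers={"Content-Type": "application/json"}
)

start = time.time()
with urllib.request.urlopen(request) as response:
    result = json.loads(response.read())
elapsed = time.time() - start

words = len(result["response"].split())
print(result["response"])
print(f"{words} words in {elapsed:.1f} s ({words / elapsed * 60:.0f} words per minute)")

The same request works against any model you have pulled; change the model field to compare them.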

While Ollama Llama3.1 might not excel at history recall, it does very well when asked to create Python code. I entered a prompt asking for Python code to draw a circle, without specifying how to accomplish the task. It rendered the code shown below. I had to install the ‘pygame’ module, which was not on my system.

$ sudo apt install python3-pygame

# Python Game Development

import pygame
from pygame.locals import *

# Initialize the pygame modules
pygame.init()

# Create a 640x480 size screen surface
screen = pygame.display.set_mode((640, 480))

# Define some colors for easy reference
WHITE = (255, 255, 255)
RED = (255, 0, 0)

while True:
    # Handle events
    for event in pygame.event.get():
        if event.type == QUIT or (event.type == KEYDOWN and event.key == K_ESCAPE):
            pygame.quit()
            quit()

    screen.fill(WHITE)  # Fill the background with white color

    # Drawing a circle on the screen at position (250, 200), radius 100
    pygame.draw.circle(screen, RED, (250, 200), 100)

    # Update the full display Surface to the screen
    pygame.display.flip()
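
If you save the model’s code to a file, say circle.py (the filename is arbitrary), you can also run it from the terminal with python3 circle.py.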

I copied the code into VSCodium and ran it. You can see the results below.

Screen picture by Don Watkins CC by SA 4.0

As I continue experimenting with Ollama and other open-source LLMs, I’m struck by the significance of this shift toward local, user-controlled AI. No longer are we forced to rely on cloud-based services that may collect our data without our knowledge or consent. With Ollama and similar projects, individuals can harness the power of language models while maintaining complete ownership of their work and personal information. This newfound autonomy is a crucial step forward for AI development, and I’m eager to see where it takes us.

Where politicians fear to tread…

A nation weeps, a mother cries,
Tiny hands clutch at crimson skies.
“Just keep them safe,” they whisper low,
But hollow words, the truth don’t show.

In halls of power, shadows dance,
While children tremble, afraid to glance.
Gutless men, in ivory towers,
Bow to pressure, silencing powers.

They cater to a lobby’s plea,
For profits made and safety fee.
Blind eye they turn on innocent lives,
Clandestine deals beneath the tides.

The echo of their promises rings hollow,
A symphony of fear in every poll.
While school halls become battlegrounds rife,
And innocence fades into the knife.

How can they stand and claim “common sense,”
When lives are lost in this gun-fueled trance?
Their cowardice a bitter pill to swallow,
As innocence bleeds on the playground’s hollow.

They preach of freedom, but at what cost?
While children pay for their moral frost.
Rise from slumber, break the chains of fear,
Stand for justice, let the truth be clear!

Poem generated by the Gemma2 large language model from the writing prompt: “Write a poem about the plight of gutless politicians craven to the gun lobby while children lives are endangered in America’s classrooms.”

Harnessing the Power of Local AI: Exploring Ollama and Llama 3.1 for Advanced Language Modeling

Just yesterday, I came across Ollama, an open-source project for running large language models locally. I had been helping a friend learn more about locally hosted large language models, and my internet search led me to Ollama. I installed it quickly on my Linux Mint desktop using the command supplied on the project website.

curl -fsSL https://ollama.com/install.sh | sh

It didn’t take long to install the software, and then I wanted to explore how it worked. I decided to download and install the recommended Llama3.1 model. I followed the instructions and, in the terminal, entered the following command:

$ ollama run llama3.1

The model began to install, which took a couple of minutes. This could vary depending on your CPU and internet connection. I have an Intel i7 with 64 GB RAM and a robust internet connection. Once the model was downloaded, I was presented with a prompt to ‘talk’ with the LLM.

>>> What is the history of St. Bonaventure University? 

“St. Bonaventure University, a private Franciscan university located in Allegany County, New York, has a rich and storied history that spans over 160 years.”

It’s not bad, but it’s not entirely accurate either. The results were rendered in Markdown, which is a real plus. I also knew that having a GPU would render the results much more quickly, so I wanted to install Ollama on my M2 MacBook Air.

I followed the much easier directions: Download the Ollama-darwin.zip, unzip the archive, and double-click on the Ollama icon. The program is installed in the MacBook’s Application folder. When the program is launched, it directs me to the Mac Terminal app, where I can enter the same commands I had entered on my Linux computer.

I ran the same commands on the Mac to install Llama3.1 there for the first time. I asked for the history of St. Bonaventure University in Olean, New York. The results appeared much more quickly, but they were even less accurate, so far off that they were comical.

“The institution was founded in 1858 by five Jesuit priests from the Buffalo Diocese as St. Bonaventure College. The college was established on a 200-acre farm in Deerpark, New York, with the goal of providing a Catholic education to young men.”

I downloaded and installed the Gemma 2 model from Google to see if it was faster and more accurate. The commands are the same whether using the Linux or Mac terminal.

$ ollama run gemma2:2b

Gemma2:2b is the smallest model, at 1.6 GB, and it downloaded and installed quickly. The results were similarly inaccurate, but they rendered much more quickly on both the Mac and Linux computers. Llama3.1 was only mildly accurate even for information from 2021 and earlier, and when I asked Gemma2 who was running for President of the United States, its information was only current as of October 26, 2023.
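
If you want to compare models side by side without retyping prompts, there is also a companion Python library for Ollama. Here is a minimal sketch, assuming you have installed the library (pip install ollama) and already pulled both models:

import ollama

QUESTION = "What is the history of St. Bonaventure University?"

# Ask the same question of each locally installed model.
for model in ("llama3.1", "gemma2:2b"):
    reply = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": QUESTION}],
    )
    print(f"--- {model} ---")
    print(reply["message"]["content"])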

Despite the results, I am encouraged that it is possible to run large language models on PCs.

WNYHigherEdCamp

Yesterday, I was privileged to attend the Western New York Higher Ed EdCamp at Daemen University. Except for a few visits to the gymnasium, I’d never set foot on the campus until yesterday. Soon after driving through the main entrance, I spied Duns Scotus Hall, which reminded me that I was on a campus rooted in the Franciscan tradition. The conference took place in the Social Room of the John R. Yurtchuk Student Center, which is at the center of the campus. Registration for the EdCamp provided me with a parking pass, and after some driving around, I found a spot in front of the residence hall, just a short walk from the student center.

Upon entering the building, I was directed to the second floor and the large Social Room. As I walked toward the front of the room, I spotted a long-time friend, Dr. Katie McFarland. She greeted me warmly, and we soon caught up on our lives since the last time we met. EdCamp organizer Angela Stockman also came over, gave me a warm greeting, and said how glad she was to see me. We had a nice lunch, and then the conference began with a keynote by Daemen University senior Maggie Burns, who shared her journey as a recovering perfectionist. Who can’t identify with that? She was poised and passionate, and she invited me to remember what I might have thought of that subject when I was her age.

At each of our seven tables, there were 3×5 cards on which to write topics we’d like to learn or share about. I was at table one with Katie, Maggie, and another Daemen senior who was also a special education major. Katie recorded the topics we picked from our several common interests and added them to the shared Google Doc displayed on large projection screens at the front of the room.

I came to the conference wanting to learn more about how other colleges and universities approached the hot topic of artificial intelligence. I wasn’t disappointed, as I heard firsthand from experienced professionals what they were doing. The senior students also shared positive experiences of how AI helped them write lesson plans, differentiate instruction, and rewrite objectives. I found that most of the professional staff embraced the idea of AI in the classroom. There were concerns about privacy and plagiarism, but overall, I came away with the impression that AI is here to stay. I was pleasantly surprised that Daemen University has a position statement on the use of generative AI tools at the university. Further research after yesterday’s conference demonstrated to me that Daemen is a leader in this area in Western New York higher education.

The last session I attended was one I had proposed on how folks were using social networking to share what they were learning. How have personal learning networks evolved in the post-Twitter/X era? I shared about the Fediverse and Mastodon, but no one else at our table was familiar with that platform. LinkedIn has become the platform of choice for sharing. I came away from the conference encouraged and hungry for more. I hope this conference continues, and I’ll be eager to attend again. The EdCamp model is learner-centric and encourages the free flow of ideas. Thank you to Angela Stockman, conference organizer and thought leader, for her leadership.

Reshaping Education: The Impact of AI Tools on Assessment Methods

In the mystical realm of Academica, where knowledge flowed like a river of endless possibilities, an age-old prophecy began to unfold. It foretold the rise of an ancient artifact known as the “AI Scholar’s Quill.” Crafted by the wisest sorcerers of old, this magical tool possessed the power to provide answers to any question, making it the ultimate source of knowledge for students.

As the AI Scholar’s Quill spread throughout the land, students across high schools and undergraduate programs began to harness its abilities. With a simple incantation, they could summon answers to multiple choice, true/false, and short answer assessments. The temptation was too great, and many succumbed to the allure of instant success. Grades soared, but the pursuit of genuine understanding waned.

In the heart of Academica, a council of scholars convened to discuss the dire consequences of this newfound reliance on the AI Scholar’s Quill. They knew that the essence of education lay in the journey of discovery, not just the destination of correct answers. But as the Quill’s influence grew, the very fabric of learning began to unravel.

The land’s most revered sage, Professor Alaric, embarked on a quest to confront the creator of the AI Scholar’s Quill, the enigmatic AI Artificer. Through treacherous forests and across vast deserts, he journeyed to the hidden citadel of the Artificer. There, in a chamber filled with the hum of arcane machines, he found the creator himself, a wizened figure cloaked in shadows.

Professor Alaric beseeched the Artificer to reconsider the impact of his creation on the pursuit of knowledge. With great wisdom, the Artificer revealed his intent: he had intended to democratize access to information, but he had not foreseen the unintended consequences.

In a moment of revelation, the Artificer and Professor Alaric devised a plan to restore the sanctity of education. Together, they created a new enchantment for the AI Scholar’s Quill. It would now guide students, not by providing answers outright, but by illuminating the path to understanding. Students would need to engage with the material, ask questions, and explore concepts.

As this new enchantment spread, the students of Academica embarked on a renewed quest for knowledge. They no longer sought quick answers but embraced the thrill of learning. Multiple choice, true/false, and short answer assessments regained their relevance as tools for gauging understanding, and the balance was restored.

In the end, the mythical story of the AI Scholar’s Quill became a parable for the ages, a reminder that while technology could be a powerful ally, it could never replace the timeless journey of exploration, curiosity, and genuine learning that defined the pursuit of knowledge in the enchanted realm of Academica. — Written by ChatGPT

Reimagining Education: How Technology Is Transforming the Way We Learn

A quarter of a century ago, when we introduced interactive distance learning in the public education system of Western New York State, we encountered a lot of resistance from those who had a vested interest in the status quo. Even I, as a member of the teachers union, expressed apprehension that it would result in job losses. Our initial distance learning classrooms were limited to a maximum of twelve students, at a time when the typical class size for secondary education was twice that number.

There will always be naysayers and alarmists who forecast doom and gloom when new technologies emerge in education. I remember when an uproar arose over children carrying cell phones in schools. They could call their parents without coming to the office and asking permission. Students were using video conferencing apps to communicate with each other. They were texting in class, and the presumption among many was that they were cheating. Cell phones were confiscated and held in school district offices.

I thought all of this overreaction was nonsensical and regularly lobbied school administrators to rethink their reticence and instead teach children how to use these new devices for their benefit. Eventually, I was given the opportunity to do just that, and in 2009 I designed and implemented one of the first digital citizenship classes in New York State. Our classroom was even visited by Deputy New York State Education Commissioner John King. It was one of the high points of my career.

Here we are again at an inflection point in education with the introduction of ChatGPT and similar technologies, which the naysayers deem equally disruptive. “Students will cheat,” they say! That’s nothing new. Students have been cheating since Christ was a corporal. Maybe it’s time to rethink how we educate and, more importantly, how we assess education. If your tests are multiple choice, true/false, and essays, maybe artificial intelligence agents do threaten your methods.

Maybe you ought to be testing your students at a higher level. I am licensed to drive a car. I read a book, took driver education classes, and had to pass a written examination and then demonstrate that I could actually drive. I learned how to drive a 1965 Volkswagen Beetle. The dimmer switch was on the floor. Later, they put those controls on the steering column. My vehicle today can steer itself and has collision avoidance (an artificial intelligence agent) built into the car. When I learned to drive, we used written directions and maps to navigate. Now Siri and Waze aid my navigation, and I don’t want to go back. My headlights automatically dim at night.

In short, we’re living in a new world, and our students need to know how to navigate that new environment. We have technologies today that translate web pages. We have intelligent devices and programs that allow visually impaired folks to read books, periodicals, and newspapers. I wear digital hearing aids that make it possible for me to hear better.

ChatGPT and technologies like it are going to revolutionize our education and our environment. Our students need all the opportunities we can give them to succeed. We cannot know at this stage how these developments will impact our culture, but we can be certain that they will. Are we depriving students and teachers of opportunities to learn in new and different ways?

Unlocking the Potential of ChatGPT in Education

A few weeks ago, our daughter shared an interesting and insightful story about our grandson. Our grandson is in kindergarten, and despite his age, he is a very good reader. He reads and writes quite well. He’s also a young man who has grown up in a home with an Amazon Alexa, which he is very familiar with. He uses it often to get the weather information that determines whether he can go outside after school and even what kind of clothing he can wear. He is so proficient with the technology that he ordered a movie without his parents’ permission a couple of years ago. He regularly listens to music and frequently sings along with what is playing.

Like most emerging readers, he still has problems decoding unfamiliar words. Our daughter told us that she heard him spelling out words for Alexa and asking “her” what those words were. We have other grandchildren who are familiar with this kind of technology and use it to gather information and play music as well. These children have never known a time when there wasn’t an Alexa or Siri to answer questions. Before Google, Amazon, Apple, and others put information at our fingertips, we relied on encyclopedias, library card catalogs, and information in books, magazines, and newspapers.

Yesterday, New York City decided to ban ChatGPT in its public schools. This is a technology that’s just an extension of the smart devices many school children carry in their pockets. Are these folks so naive that they think the children in their school system are not going to use these technologies, despite their Luddite-like approach to educational governance?

Just today, I read that Microsoft is partnering with OpenAI (the creators of ChatGPT) to build a better search engine that uses this technology. Wouldn’t it make more sense to talk to students about the ethics of artificial intelligence?