Using Ollama to write an evaluation rubric

In my previous article, I discussed using Ollama and the Llama3.1 model to assess my writing. Since I’m new to teaching writing, I was curious to see if Ollama could generate an evaluation rubric for grading student writing and then use that rubric to evaluate a sample of student writing.

I am going to assume that you have already installed Ollama on your computer and downloaded the Llama3.1 model. If not, be sure to follow the directions provided in the previous article. Start Ollama and run the model:

$ ollama run llama3.1:latest

>>>Develop an assessment rubric for the evaluation of undergraduate student writing

Here is the assessment rubric that was created by Ollama and Llama3.1:

Content (40 points)

  • Thesis Statement: Does the thesis statement clearly and concisely present the main argument or claim? (5 points)
    • 4: The thesis statement is clear, concise, and well-supported by evidence.
    • 3: The thesis statement is mostly clear and concise, but may lack some supporting evidence.
    • 2: The thesis statement is somewhat unclear or lacking in concision, but has some supporting evidence.
    • 1: The thesis statement is unclear, vague, or lacks supporting evidence.
  • Argumentation: Are the arguments presented well-supported by evidence? (10 points)
    • 9-10: Strong arguments are presented with ample and relevant evidence to support them.
    • 7-8: Good arguments are presented, but may lack some relevant evidence.
    • 5-6: Fair arguments are presented, but may be lacking in supporting evidence.
    • 0-4: Poor or absent argumentation.
  • Analysis: Is the analysis of the topic thorough and insightful? (10 points)
    • 9-10: The analysis is thorough, well-supported, and offers new insights into the topic.
    • 7-8: Good analysis is presented, but may lack some depth or insight.
    • 5-6: Fair analysis is presented, but may be lacking in detail or insight.
    • 0-4: Poor or absent analysis.
  • Counterarguments: Are counterarguments addressed and refuted effectively? (5 points)
    • 4: Effective counterarguments are presented and refuted with strong evidence.
    • 3: Some counterarguments are acknowledged, but not fully refuted.
    • 2: Few counterarguments are mentioned or refuted poorly.
    • 1: No counterarguments are addressed.

Organization (20 points)

  • Introduction: Is the introduction engaging and effective in setting up the rest of the paper? (5 points)
    • 4: The introduction is clear, concise, and effectively sets up the rest of the paper.
    • 3: The introduction is mostly clear and concise, but may lack some engagement or clarity.
    • 2: The introduction is somewhat unclear or lacking in concision.
    • 1: The introduction is weak or absent.
  • Body Paragraphs: Are the body paragraphs well-organized and logically connected? (10 points)
    • 9-10: Strong organization and logical connections between paragraphs are evident.
    • 7-8: Good organization and some logical connections, but may be lacking in coherence.
    • 5-6: Fair organization and some attempts at connecting paragraphs.
    • 0-4: Poor or absent organization.
  • Conclusion: Is the conclusion effective in summarizing the main points and reiterating the thesis? (5 points)
    • 4: The conclusion is clear, concise, and effectively summarizes the main points.
    • 3: The conclusion is mostly clear and concise, but may lack some summary or repetition of the thesis.
    • 2: The conclusion is somewhat unclear or lacking in concision.
    • 1: The conclusion is weak or absent.

Style (20 points)

  • Writing Style: Is the writing style clear, concise, and engaging? (10 points)
    • 9-10: Strong writing style is evident, with clear, concise language and engaging tone.
    • 7-8: Good writing style is present, but may lack some clarity or concision.
    • 5-6: Fair writing style is present, but may be lacking in clarity or engagement.
    • 0-4: Poor or absent writing style.
  • Mechanics: Are grammar, spelling, and punctuation accurate? (5 points)
    • 4: Strong attention to detail and accuracy in mechanics.
    • 3: Good attention to detail and some accuracy, but may be lacking in others.
    • 2: Fair attention to detail, with some errors in mechanics.
    • 1: Poor or absent attention to detail.

Mechanics (20 points)

  • Grammar: Is grammar accurate and effective? (5 points)
    • 4: Strong grammar is present throughout the paper.
    • 3: Good grammar is mostly present, but may be lacking in some areas.
    • 2: Fair grammar is present, with some errors.
    • 1: Poor or absent grammar.
  • Spelling: Is spelling accurate and effective? (5 points)
    • 4: Strong spelling is evident throughout the paper.
    • 3: Good spelling is mostly present, but may be lacking in some areas.
    • 2: Fair spelling is present, with some errors.
    • 1: Poor or absent spelling.
  • Punctuation: Is punctuation accurate and effective? (5 points)
    • 4: Strong punctuation is evident throughout the paper.
    • 3: Good punctuation is mostly present, but may be lacking in some areas.
    • 2: Fair punctuation is present, with some errors.
    • 1: Poor or absent punctuation.

Total: The total score will range from 0 to 100 points.

I copied and pasted the text from the article I wrote in 2022 about 12 Essential Linux Commands for Beginners and asked Ollama to evaluate that article using the rubric it had just created. Here is the result:

Content

  • The article provides a comprehensive list of 12 essential Linux commands, along with explanations, examples, and usage tips.
  • The topics covered are relevant to both beginners and experienced users.
  • The tone is conversational and easy to understand.

Organization

  • The article is well-structured, with each command introduced as a separate section.
  • The sections are concise and to the point, making it easy for readers to quickly grasp the key concepts.
  • The use of headings and subheadings helps to break up the content and make it more readable.

Writing Style

  • The writing style is clear, concise, and engaging.
  • The author uses everyday language, avoiding technical jargon whenever possible.
  • The text includes humorous anecdotes and personal experiences, making it more relatable and entertaining.

Structure and Pacing

  • The article flows smoothly from one command to the next, with each section building on the previous one.
  • The pacing is well-balanced, with sufficient information provided for readers to understand the concepts without feeling overwhelmed.
  • The use of examples and exercises helps to reinforce learning and keep readers engaged.

Technical Quality

  • The technical accuracy of the article appears to be high, with correct usage examples and syntax.
  • The author provides sufficient context and explanations for each command, making it easy for readers to understand their functionality.
  • The inclusion of links to external resources (e.g., trashy and trash-cli) adds value and helps readers explore further.

Originality and Depth

  • While the article covers familiar topics, it offers a fresh perspective and new insights into each command.
  • The author provides additional context and explanations that go beyond mere listing of commands.
  • The inclusion of personal anecdotes and experiences adds depth and relatability to the content.

Conclusion and Future Work

  • The article concludes with an open-ended question, encouraging readers to share their favorite Linux commands.
  • This approach fosters engagement and encourages further discussion, which can be beneficial for building a community around Linux and its usage.

Overall, I would give this article a score of 4.5 out of 5 in content, organization, writing style, structure and pacing, technical quality, originality and depth, and conclusion and future work. The only area where it could improve is by providing more advanced or specialized topics for experienced users.
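Pasting into the REPL works, but this whole workflow can also be scripted with Ollama's Python module, which I describe in a later section. Here is a minimal sketch, assuming the rubric and the student essay are saved in local files; rubric.md and essay.txt are hypothetical names used for illustration, and the Ollama server must be running locally.

import ollama

# Hypothetical files: rubric.md holds the rubric generated above,
# essay.txt holds the student writing to be scored.
rubric = open('rubric.md').read()
essay = open('essay.txt').read()

# Assumes the llama3.1 model has already been pulled.
response = ollama.chat(
    model='llama3.1',
    messages=[
        {'role': 'system', 'content': 'You are a writing instructor. '
                                      'Grade the essay strictly against the rubric provided.'},
        {'role': 'user', 'content': f'Rubric:\n{rubric}\n\nEssay:\n{essay}'},
    ],
)
print(response['message']['content'])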

What are your thoughts on using open source LLMs to evaluate student writing? What are the ethical considerations? Have you used a large language model to evaluate student writing?

Using Ollama to evaluate what you write

Last year, a colleague shared with me how he used ChatGPT to evaluate content generated by his students at a local university. I was intrigued by the idea but also concerned about the ethical issues surrounding using publicly available LLMs to read student writing without their consent. This led me to look for locally hosted LLMs that could aid in grading and evaluating student work without putting their work in a publicly accessible large language model.

I am not qualified to evaluate writing, but I do use Grammarly to help me with my writing. Are there open-source tools in addition to proprietary products like Grammarly that could aid me in the writing process, and could they be successfully used by university staff and students to improve writing outcomes?

My experience with Ollama led me to consider using it to evaluate my writing and suggest to others that it could be used to assess student writing. If you haven’t already done so, you will need to install Ollama. The commands below will install it on Linux, which is my daily driver. There are detailed instructions on the Ollama website to help you download and install it on MacOS and Windows.

curl -fsSL https://ollama.com/install.sh | sh

I decided to use the Llama3.1 model for this project. I issued the following series of commands in the terminal to launch Ollama, download the Llama3.1 model, and then run it.

$ ollama pull llama3.1 

$ ollama run llama3.1

>>>Evaluate the following writing sample:

I copied and pasted an article I had written into the Ollama REPL (Read Evaluate Print Loop) and pressed Enter. The fourteen-hundred-word article, written in 2016, was about a lecture our Franciscan Fraternity had attended on the similarities between our faith and Islam. The evaluation took several minutes on my Linux desktop, which has a quad-core 11th Gen Intel Core i7-1165G7 and 64 gigabytes of RAM but lacks a GPU, and a GPU matters a great deal when running these models. Nonetheless, it completed the task and provided a three-hundred-fifty-five-word writing analysis. A sample of the analysis is provided below.

“This writing sample appears to be a reflective essay or report on a lecture by Fr. Michael Calabria about Islam from a Catholic perspective. Here’s an evaluation:

Strengths:

Clear structure: The text is well-organized, with a logical flow of ideas and a clear conclusion.

Engaging narrative: The writer effectively conveys the content of the lecture, making it easy to follow and understand.

The analysis provided a list of weaknesses and suggestions for improvement.

Weaknesses:

Lack of a clear thesis statement: The writer’s purpose or main takeaway from the lecture is not explicitly stated, leaving the reader to infer the significance of the event.

Lengthy and detailed report: While the narrative is engaging, the text feels overly long and detailed, which may make it challenging for some readers.

Suggestions for improvement:

Edit for clarity and consistency: Review the text for minor errors in formatting, punctuation, and grammar to ensure a polished final product.

Consider adding more analysis or reflection: While the narrative is engaging, some readers may appreciate more analysis or reflection on the event’s significance or what it means for the writer’s understanding of Islam.

This writing sample demonstrates solid narrative skills and engagement with the subject matter. With some editing and refinement, it could become an even more substantial piece of writing.”

Have you used Ollama or other open-source artificial intelligence to evaluate your writing? My next article will be about using Ollama and Llama3.1 to construct a rubric to evaluate student writing and then using that rubric to evaluate an article I wrote for Opensource.com, 12 Essential Linux Commands for Beginners. All output from Ollama is provided in Markdown format. What implications does this have for your writing and for the evaluation of student writing?
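Because the output is already Markdown, one practical implication is that you can capture an evaluation directly into a .md file from the shell instead of copying it out of the REPL. A quick sketch, where sample.txt is a hypothetical file holding the writing sample:

$ ollama run llama3.1 "Evaluate the following writing sample: $(cat sample.txt)" > evaluation.md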

Working with my AI chef

Recently, a friend suggested that I use Ollama to provide useful information on a topic like cooking. I drew on my experience customizing model files and used the latest iteration of Llama3.1 to accomplish that goal. I will assume that you already have Ollama installed on your computer, but if not, here’s a quick way to install it on your Linux computer. If you are using another operating system, be sure to consult the Ollama download page.

curl -fsSL https://ollama.com/install.sh | sh

Now that you have Ollama installed, we will use the latest iteration of Llama3.1. I am going to download it to my computer:

ollama pull llama3.1

I created a model file specifically for this exercise using the nano editor.

FROM llama3.1:latest

# set the temperature to 1 (higher is more creative, lower is more coherent)
PARAMETER temperature 1

# Set the system prompt
SYSTEM """

You are a master chef. You provide detailed gluten free recipes on request.
"""

Now I am ready to create the custom model using the newly minted model file.

$ ollama create cooking -f ./cooking

This custom model will function as an artificially intelligent chef. I run the model with the following command:

$ ollama run cooking

I gave my intelligent assistant the following prompt:

>>>You have peanut butter, milk, and honey. Provide one good gluten-free meal recipe.

What a delightful combination! Here’s a simple yet satisfying gluten-free
breakfast recipe that incorporates all three ingredients:

Gluten-Free Peanut Butter Pancakes with Honey and Milk

Servings: 4-6 people

Ingredients:

  • 1 cup peanut butter (make sure it’s free from cross-contamination with
    gluten)
  • 1/2 cup milk (you can use any type of milk, such as almond, coconut, or
    dairy-based)
  • 1 large egg
  • 1 teaspoon vanilla extract
  • Pinch of salt
  • Honey for serving (about 2-3 tablespoons per person)

Instructions:

  1. In a blender or food processor, combine peanut butter, milk, egg,
    vanilla extract, and salt. Blend until smooth.
  2. The batter should be thick but still pourable. If it’s too thick, you
    can add a little more milk. If it’s too thin, you can add a bit more
    peanut butter.
  3. Heat a non-stick skillet or griddle over medium heat. Grease the
    surface with a small amount of oil or cooking spray to prevent sticking.
  4. Using a 1/4 cup measuring cup, scoop the batter onto the skillet or
    griddle.
  5. Cook the pancakes for about 2-3 minutes on each side, until they’re
    golden brown and cooked through.
  6. Serve the peanut butter pancakes hot with a drizzle of honey. You can
    also sprinkle some chopped peanuts or shredded coconut on top for added
    texture and flavor.

Tips and Variations:

  • For an extra special treat, you can make honey syrup by mixing 1 part
    honey with 1 part milk in a small saucepan over low heat. Whisk until the
    mixture is smooth and warm.
  • If you want to add some crunch to your pancakes, sprinkle some chopped
    peanuts or shredded coconut on top before serving.
  • Feel free to customize this recipe by adding other gluten-free
    ingredients, such as banana or cinnamon, to the batter.

Enjoy your delicious gluten-free peanut butter pancakes with honey!

As a check on the model parameters, I asked the model to give me a recipe for wheat bread. It responded with the following caveat:

“I’d be happy to provide you with a traditional wheat bread recipe!
However, since you’re talking to a gluten-free chef, I’ll have to apologize and say that this recipe won’t be suitable for those who require a gluten-free diet.”

Ollama is open source with an MIT license. Llama3.1 has a community license.

Using Python to talk with an Ollama model

Continuing my exploration of using a locally hosted Ollama on my Linux desktop computer, I have been doing a lot of reading and research. Today, while I was having lunch with a university professor, he asked me some questions I didn’t have an immediate answer to. So, I went back to my research to find the answers.

My computer is a Linux desktop with an 11th-generation Intel Core i7-1165G7 processor and 64 gigabytes of RAM. Until today, I had been interacting with Ollama and several models, including Gemma, Codegemma, Phi-3, and Llama3.1, from the command line. Running the Ollama command-line client and interacting with LLMs at the Ollama REPL is a good way to get acquainted, but I wanted to learn how to use Ollama in applications, and today I made a good start.

Python is my preferred language, and I use VSCodium as my editor. First, I needed to set up a virtual Python environment. I have a ‘Coding’ directory on my computer, but I wanted to set up a separate one for this project.

$ python3 -m venv ollama

Next, I activated the virtual environment:

$ source ollama/bin/activate

Then, I needed to install the ‘ollama’ module for Python.

pip install ollama

Once the module was installed, I opened VSCodium. First, I ran the ‘ollama list’ command to make sure that ‘codegemma’ was installed. Then I took a code snippet I found online and tailored it to generate some Python code to draw a circle.

import ollama


response = ollama.generate(model='codegemma', prompt='Write a Python program to draw a circle spiral in three colors')
print(response['response'])

The model query took some time to complete. Despite having a powerful computer, the lack of a GPU significantly impacted performance, even on such a minor task. The resulting code looked good.



import turtle

# Set up the turtle
t = turtle.Turtle()
t.speed(0)

# Set up the colors
colors = ['red', 'green', 'blue']

# Set up the circle spiral parameters
radius = 10
angle = 90
iterations = 100

# Draw the circle spiral
for i in range(iterations):
    t.pencolor(colors[i % 3])
    t.circle(radius)
    t.right(angle)
    radius += 1

# Hide the turtle
t.hideturtle()

# Keep the window open
turtle.done()
Screen picture by Don Watkins CC by SA 4.0
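The ‘generate’ call above blocks until the entire response is ready, which makes a CPU-only machine feel even slower than it is. The Python module can also stream a chat response token by token as it is produced; here is a minimal sketch of that approach, using the same codegemma model:

import ollama

# Stream the reply so tokens print as they arrive instead of
# waiting for the full response; welcome feedback on a CPU-only machine.
stream = ollama.chat(
    model='codegemma',
    messages=[{'role': 'user', 'content': 'Write a Python function that reverses a string.'}],
    stream=True,
)
for chunk in stream:
    print(chunk['message']['content'], end='', flush=True)
print()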

Contemporary AI Ethics – Can We Safeguard Innovation Without Falling into Repetitive Cycles?

Years ago, I watched a TED talk by Larry Lessig about laws that stifle creativity. He made several excellent points in his speech, and it got me thinking about whether we are reaching a critical point in terms of laws regulating the use of generative AI. Recently, I listened to a podcast where the host claimed that there is no truly open-source AI and that, eventually, an incestuous situation could develop due to web scraping to train large language models (LLMs): LLMs would create content, and new models would then be trained on that LLM-generated content, potentially resulting in a twenty-first-century Tower of Babel.

Do we need to build on the ideas presented in Larry’s influential talk to adapt to the current reality? Will large language models and other forms of artificial intelligence lower the quality of our culture and intelligence, or will they enhance culture and creativity as we’ve seen in the seventeen years since his talk?

Working with Ollama

Now that I’m working with Ollama, I needed to figure out how to locate the models on my storage medium and determine the amount of space they were occupying. This is especially important for the MacBook Air, which has a 256 GB drive that is already half full. My Linux computer, on the other hand, has a terabyte NVME drive, so I’m not as worried about storage there. However, I still like to track where everything is and how much storage it uses. I wanted to compile a list of the downloaded models and learn how to delete them.
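For reference, Ollama keeps its downloaded models in a fixed location: on a standard Linux install (using the install script) they live under /usr/share/ollama/.ollama/models, and on MacOS under ~/.ollama/models. You can check how much space the whole model store occupies with du; on Linux you may need sudo because the directory belongs to the ollama user.

$ sudo du -sh /usr/share/ollama/.ollama/models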

After Ollama is installed, you can get a list of the available commands by entering the following command in the terminal.

$ ollama help

Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  ps          List running models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information


Check the version of ollama you are using with:

$ ollama -v

ollama version is 0.3.9


Downloading models is easy using the ‘ollama pull’ command:

$ ollama pull gemma2

This downloads the model to your local computer. You can also trigger a download implicitly by running a model you haven’t pulled yet:

$ ollama run phi3

This command pulls the ‘phi3’ model down to your computer and then runs it. After I had tried a few models, I wanted to get a sense of how much space they were taking up on my system, so I used the ‘list’ (or ‘ls’) subcommand of the ‘ollama’ command.

$ ollama list
NAME            	ID          	SIZE  	MODIFIED     
llama3.1:latest 	f66fc8dc39ea	4.7 GB	2 hours ago 	
codegemma:latest	0c96700aaada	5.0 GB	43 hours ago	
phi3:latest     	4f2222927938	2.2 GB	47 hours ago	
gemma2:latest   	ff02c3702f32	5.4 GB	3 days ago  	
phi3:medium     	cf611a26b048	7.9 GB	3 days ago  	
gemma2:2b       	8ccf136fdd52	1.6 GB	3 days ago  

Now I can see how much storage the models are using. It’s easy to delete a model. I decided to delete the oldest one first.

$ ollama rm gemma2
deleted 'gemma2'

Just to be sure, I ran ‘ollama ls’; sure enough, the model was gone.

NAME            	ID          	SIZE  	MODIFIED     
llama3.1:latest 	f66fc8dc39ea	4.7 GB	2 hours ago 	
codegemma:latest	0c96700aaada	5.0 GB	44 hours ago	
phi3:latest     	4f2222927938	2.2 GB	47 hours ago	
phi3:medium 

You can check which models are currently running on your machine with the following command:

$ ollama ps
NAME       	ID          	SIZE  	PROCESSOR	UNTIL              
phi3:latest	4f2222927938	6.0 GB	100% CPU 	4 minutes from now	

The ‘show’ command displays information about a model.

$ ollama show phi3
  Model
    arch                phi3
    parameters          3.8B
    quantization        Q4_0
    context length      131072
    embedding length    3072

  Parameters
    stop    "<|end|>"
    stop    "<|user|>"
    stop    "<|assistant|>"

  License
    Microsoft.
    Copyright (c) Microsoft Corporation.

Stop a response mid-generation by pressing Ctrl + C, and exit the model’s REPL by pressing Ctrl + D. In conclusion, Ollama has revolutionized how we understand and interact with AI.

Breaking Free from the Cloud: Exploring the Benefits of Local, Open-Source AI with Ollama

Everywhere you look, someone is talking or writing about artificial intelligence. I have been keenly interested in the topic since my graduate school days in the 1990s. I have used ChatGPT, Microsoft Copilot, Claude, Stable Diffusion, and other AI software to experiment with how this technology works and satisfy my innate curiosity. Recently, I discovered Ollama, an open-source tool that runs large language models, including Meta’s Llama models, locally on Linux, MacOS, and Microsoft Windows. There is a great deal of concern that when you use LLMs in the cloud, your data is being scraped and reused by one of the major technology companies. Ollama has an MIT license, and since it runs locally, there is no danger that your work could end up in someone else’s LLM.

The Ollama website proclaims, “Get up and running with Large Language Models.” That invitation was all I needed to get started. Open a terminal on Linux and enter the following to install Ollama:

curl -fsSL https://ollama.com/install.sh | sh

The project lists all the models that you can use, and I chose the first one on the list, Llama3.1. Installation is easy and did not take long. Following the instructions, I entered the following command in the terminal:

$ ollama run llama3.1

The model began to download, which took a couple of minutes; this could vary depending on your CPU and internet connection. I have an Intel i7 with 64 GB of RAM and a robust internet connection. Once the model was downloaded, I was prompted to ‘talk’ with the LLM. I decided to ask a question about the history of my alma mater, St. Bonaventure University. I entered the following commands:

$ ollama run llama3.1
>>>What is the history of St. Bonaventure University?

The results were good but somewhat inaccurate. “St. Bonaventure University is a private Franciscan university located in Olean, New York. The institution was founded by the Diocese of Buffalo and has a rich history dating back to 1856.” St. Bonaventure is located near Olean, New York, and it is in the Diocese of Buffalo, but it was founded in 1858. When I asked the model to name some famous St. Bonaventure alumni, the inaccuracies were comic: Bob Lanier was a famous alumnus, but Danny Ainge was not.

The results are rendered in Markdown, which is a real plus. I also knew that having a GPU would render the results much more quickly, so I wanted to install Ollama on my M2 MacBook Air, which I soon did. I followed the much easier directions: download the Ollama-darwin.zip, unzip the archive, and double-click the Ollama icon. The program is installed in the MacBook’s Applications folder. When the program is launched, it directs me to the Mac Terminal app, where I can enter the same commands I had entered on my Linux computer.

Unsurprisingly, Ollama uses a great deal of processing power, which is lessened if you run it on a computer with a GPU. My Intel NUC 11 is a very powerful desktop computer with a quad-core 11th Gen Intel Core i7-1165G7, 64 gigabytes of RAM, and a robust internet connection for downloading additional models. I posed similar questions to the Llama3.1 model, first on the Intel NUC running Linux and then on the M2 MacBook Air running MacOS. You can see the CPU utilization on my Linux desktop below. It’s pegged, and the output from the model is slow, at an approximate rate of 50 words per minute. Contrast that with the M2 MacBook, whose GPU kept CPU utilization at approximately 6.9% and produced words faster than I could read them.

Screen picture by Don Watkins CC by SA 4.0

While Llama3.1 might not excel at history recall, it does very well when asked to create Python code. I entered a prompt asking for Python code to draw a circle without specifying how to accomplish the task. It rendered the code shown below. I had to install the ‘pygame’ module, which was not on my system.

$ sudo apt install python3-pygame

# Python Game Development

import pygame
from pygame.locals import *

# Initialize the pygame modules
pygame.init()

# Create a 640x480 size screen surface
screen = pygame.display.set_mode((640, 480))

# Define some colors for easy reference
WHITE = (255, 255, 255)
RED = (255, 0, 0)

while True:
    # Handle events
    for event in pygame.event.get():
        if event.type == QUIT or (event.type == KEYDOWN and event.key == K_ESCAPE):
            pygame.quit()
            quit()

    screen.fill(WHITE)  # Fill the background with white color

    # Drawing a circle on the screen at position (250, 200), radius 100
    pygame.draw.circle(screen, RED, (250, 200), 100)
    
    # Update the full display Surface to the screen
    pygame.display.flip()

I copied the code into VSCodium and ran it. You can see the results below.

Screen picture by Don Watkins CC by SA 4.0

As I continue experimenting with Ollama and other open-source LLMs, I’m struck by the significance of this shift toward local, user-controlled AI. No longer are we forced to rely on cloud-based services that may collect our data without our knowledge or consent. With Ollama and similar projects, individuals can harness the power of language models while maintaining complete ownership over their work and personal information. This newfound autonomy is a crucial step forward for AI development and I’m eager to see where it takes us.

Sharing Tech for Good: How I Gifted My Neighbor a Refurbished Linux Mint Laptop

I volunteer with Meals on Wheels once a week, and one of the clients on my route is a man my age. We frequently spend time visiting when I bring his meal to him. Last week, when I stopped, he asked me if I could help him put his laptop computer back in working order. I told him I’d do him one better—installing Linux Mint on his ailing computer and extending its life.

After checking with him about the model of his laptop, I discovered that it was a lightly powered notebook with minimal disk storage. I had a five-and-a-half-year-old System76 Darter Pro just sitting around gathering dust, so I decided to install Linux Mint Cinnamon 22, which is my daily driver. Though the Darter Pro is aging, it still has lots of life in it: an Intel i7 CPU with 16 gigabytes of RAM and a 250-gigabyte NVME drive, which would make it a fantastic machine for my friend.

When I shared my decision with him, he was delighted, and though he trusted my judgment, he knew nothing about Linux Mint and wondered if he could master the learning curve. I told him that Mint Cinnamon has many similarities with the Windows 10 desktop he was used to. I searched YouTube and other sites for an excellent introduction to Linux Mint but could not find the kind of documentation I wanted to give to my friend, so I decided to write my own, complete with screenshots of the critical first steps to get started with a Linux Mint laptop.

This is the first screen you see after you start up the laptop. His name is Tom, and I shared his password in the documentation. I knelt next to him as I helped him through these first steps.

Linux Mint Login screen

The next thing he saw was the Welcome to Linux Mint splash screen. I provided a brief explanation and encouraged him to return to it later for the documentation and help information.

I directed his attention to the screen’s lower right-hand corner and guided him through connecting to his wireless access point.

I explained that the fourth item from the left is the ‘network connection’ icon and that he needed to click on it to display the available wireless networks. We quickly found his local connection.

He was unsure of his wireless password, but after some reflection, and recalling other people who use his internet service provider, I remembered that the ISP uses the customer’s mobile phone number as the default password. That worked, and we were connected. I walked him through installing the necessary updates on his new Linux laptop, which also let me demonstrate the security of Linux systems, since each update required his password. Linux Mint comes with Firefox by default, but I also showed him how to add the Google Chrome browser and some other applications using the Software Manager.

Eventually, I showed him how to use most of the essential tools of this new Linux Mint computer, including ‘Nemo’, the Linux Mint file manager.

After almost forty-five minutes of initial setup and instruction, he was ready to dive in and experience Linux Mint firsthand. This morning, I received an email from Tom, and it warmed my heart. He said, “I was on my new computer till 11:00 last night. I’m like a kid with a new toy. I want to thank you for the info you sent on linux. I perused the emails and quickly learned I would never be a Linux expert…Again, a thousand thanks for the laptop. I was in a bind, and you really got my feet from the fire. Sincerely, tom.”

His email made my day. This is why I love to share the Linux and open-source journey with others: we can extend the life of existing hardware and demonstrate the effectiveness of the open-source approach.

Five things you can do with the nano editor

In the early stages of my experience with Linux servers, I had to learn how to edit text files from the command line. While there are other powerful text editors in Linux, such as vi and vim, I found Nano to be particularly useful. Nano is a simple yet powerful text editor that comes pre-installed on many Linux distributions. If it isn’t on your system, you can easily install it from the command line.

Debian-based systems:

$ sudo apt install nano

RPM-based systems:

$ sudo dnf install nano

Basic Text Editing

Nano is a user-friendly text editor designed for simple and efficient text editing. To open a file, type “nano” followed by the file name in the terminal. Once inside, you can begin typing or editing text immediately. Navigation is easy, using the arrow keys to move around. To save your changes, press Ctrl + O; to exit, press Ctrl + X.

Screen picture by Don Watkins CC by SA 4.0

Search and Replace

Nano has a valuable search and replace feature. To search for a specific term, press Ctrl + W, type your search term, and press Enter. To replace text, press Ctrl + \, enter the text you want to replace, and then enter the new text. This feature is handy for quickly updating configuration files or scripts.

Undo and Redo

Mistakes happen, but Nano makes it easy to correct them with its undo and redo functionality. Press Alt + U to undo the last action and Alt + E to redo it. This feature ensures that you can quickly revert changes without losing your progress.

Syntax Highlighting

Nano offers syntax highlighting for those working with code, making reading and editing code easier. Syntax highlighting is available for various programming languages and can be enabled by adding the appropriate syntax files to your Nano configuration.

Screen picture by Don Watkins CC by SA 4.0
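On most distributions, the syntax definition files ship with nano itself. Here is a minimal sketch of a ~/.nanorc that enables all of them, assuming they are installed in /usr/share/nano, the usual location on Debian- and RPM-based systems:

# ~/.nanorc
# Pull in every syntax definition that ships with nano
include "/usr/share/nano/*.nanorc"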

Custom Key Bindings

Nano enables you to customize key bindings to match your workflow. You can edit the /etc/nanorc file to modify default key bindings or add new ones. This flexibility allows you to personalize the editor based on your specific requirements, enhancing your editing experience and making it more efficient.
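As a small illustration, here is a hedged sketch of two bindings you could append to ~/.nanorc (or /etc/nanorc for all users); savefile and exit are standard nano functions, and ^S means Ctrl + S:

# Make Ctrl + S save the current file from the main editing menu
bind ^S savefile main
# Make Ctrl + Q exit from any menu
bind ^Q exit all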

Nano’s simplicity and powerful features make it a great choice for text editing in Linux. Whether editing configuration files, writing scripts, or taking notes, Nano has the tools to do the job efficiently.

Getting a hand from Handbrake

I have dozens of VHS tapes recorded, in some cases nearly 30 years ago, of our children when they were young. About ten years ago, I used a Linux computer and dvgrab to capture the video using a FireWire port on the computer and an aging digital video camera. The setup worked quite well. Using this process, I was able to convert many of the analog tape videos to MP4s.

I was eager to share some video clips with our grandson recently. I wanted him to see what his Mom looked and acted like when she was his age. The videos, converted to digital format and residing in a folder on my Linux computer, were ready to be transferred to my iPhone for sharing.

My usual file transfer method, qrcp, has been reliable for moving files between my Linux desktop and iOS devices. This time, however, I was disheartened: the video transferred seamlessly, but the audio track was mysteriously absent. This disappointment led me to consider Handbrake as a potential solution. I have used Handbrake in the past to convert video files to a format compatible with iOS and other modern digital playback devices.

I installed Handbrake as a Flatpak on my Linux desktop, but you can just as easily install it as a system package.

Debian-based systems:

$ sudo apt install handbrake

RPM-based systems:

$ sudo dnf install handbrake

Once installed, I launched the program.

Screenshot by Don Watkins CC by SA 4.0

From the ‘File’ menu, I selected ‘Open Source’, which opened a dialog box where I could select the video file I wanted to convert. I selected the one-hundred-twenty-eight-megabyte MP4 and then clicked ‘Open’ at the bottom of the program window.

Screenshot by Don Watkins CC by SA 4.0

Looking again at the Handbrake program display, I had some choices to make to ensure that the converted video would display properly on an iPhone or other iOS device.

Screenshot by Don Watkins CC by SA 4.0

Referring to the screenshot above, it is important to choose the format in which you want to save the video. There are three choices: MPEG-4, Matroska, and WebM. I chose MPEG-4, and within that, I also chose ‘Web Optimized’, which ensures that the converted video will be a smaller file that is more easily shared on the web or from a mobile device. At the bottom of the program window, you can choose a name for the completed file. The default is the original name, but I suggest a different name so that you don’t overwrite the original, which is important for archival purposes. The default ‘Save’ location is your ‘Video’ folder, but you can easily choose another folder on your system.
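If you prefer the terminal, the same conversion can be done with HandBrake’s command-line interface, packaged separately as handbrake-cli on many distributions. A rough equivalent of the GUI choices above, with the preset name being an assumption on my part:

$ HandBrakeCLI -i original.mp4 -o web-optimized.mp4 --preset "Fast 1080p30" --optimize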

Once you are sure you have made all the proper menu selections, click the ‘Start’ button at the top of the program window.

Screenshot by Don Watkins CC by SA 4.0

This begins converting and transcoding the larger MP4 file into a smaller file compatible with iOS devices. The process takes a brief period of time and depends in part on your processor’s speed. The new video is 42 megabytes, down from the original 128, and can be replayed and reshared on a mobile device. Handbrake has excellent documentation. It is open source, available for Linux, Mac, and Windows, and licensed under the GNU General Public License (GPL) Version 2.