Neofetch, hosted in its project repository on GitHub, is a command-line tool that displays system information, making it well suited for creating system configuration screenshots on various platforms. The primary difference between Neofetch and ScreenFetch lies in Neofetch's broader support: it extends beyond Fedora, RHEL, and CentOS and is compatible with nearly 150 different operating systems, including lesser-known ones like Minix and AIX!
The Neofetch installation procedure is equally straightforward:
On Debian and Ubuntu, use the following command:
$ sudo apt install neofetch
On Fedora and other RPM-based distributions, use the following command:
$ sudo dnf install neofetch
You can also install Neofetch on other operating systems, including macOS, using Homebrew:
$ brew install neofetch
Once installed, Neofetch provides a standard system info display that can be customized to your preference with image files, ASCII art, or even wallpaper, to name a few options. All these customizations are stored in the .config/neofetch/ directory of the user's home folder.
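On first run, Neofetch generates a config.conf file in that directory; the print_info function inside it controls which lines appear in the output. Here is a minimal sketch of an edited excerpt (the field selection is illustrative, not a recommended default):

```shell
# ~/.config/neofetch/config.conf (excerpt)
# Each info line adds one field to Neofetch's output, in this order.
print_info() {
    info "OS" distro
    info "Kernel" kernel
    info "Uptime" uptime
    info "Memory" memory
}
```

Command-line flags can also override settings for a single run; for example, neofetch --ascii_distro fedora previews another distribution's ASCII logo without editing the file.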
As we welcome another academic year, the integration of creative arts within school curricula remains vital. Among the open source resources that empower students to engage with digital soundscapes is Audacity, a free and versatile audio recording and editing application cherished by educators for its simplicity and power in the classroom. Audacity's capabilities have only grown, making it an indispensable tool not just today but as we look ahead to 2025 with ever-evolving educational needs:
Podcasting Platform of Choice: Connectivity through Sound
Teachers and students alike have adopted Audacity for crafting podcasts, serving purposes that range from explaining classroom procedures to delivering language learning content. This interactive form has become a cornerstone of modern pedagogy, facilitating out-of-class communication that supplements the traditional teaching experience and offering students additional access points into course material through auditory means, which can enhance comprehension for many learners.
Language Acquisition with Audio Engagement: Learning Languages Through Listening
For language education, Audacity has been transformative, providing a platform where foreign language pupils record their spoken lessons and listen to them repeatedly, all within the safety net of open source software that champions accessibility for all students. This nurtures self-directed learning as well as peer interaction in multi-language classrooms, creating an immersive auditory environment akin to real-world conversational scenarios.
Creative Expression Through Sound: Student Audio Projects
Students' love for sound extends beyond passive listening; they are creators in their own right, using Audacity to produce unique audio projects such as recordings of bird songs, ambient ocean tracks, or custom narrations over chosen background music. This engagement stimulates imagination while providing a practical understanding of digital tools and of copyright through exploring resources from the Creative Commons and Wikimedia sound collections, a learning process that teaches respect for intellectual property alongside technical skills in audio manipulation.
Interviews as Interactive Learning: Engaging with Experts Through Sound
Audacity allows students to conduct and record interviews, integrating them into their educational activities by adding layers of personal experience and expert insight through the auditory channel. This not only humanizes learning but also bridges generations within a classroom setting, as older family members share experiences with younger ones. The approach promotes active listening skills while fostering familial bonds, an essential lesson beyond academics alone.
From Capture to Share: Effective Audio File Management for the Modern Classroom and Beyond
Education today is not just about content but also about delivery, which is where Audacity helps students understand how different audio file formats serve various platforms: project files (.aup3 in recent versions) that facilitate ongoing collaboration, and MP3 and WAV exports for final projects suitable for wider sharing via streaming web servers or digital portfolios. The software prepares young minds not only with technical skills but also with the industry standards they will encounter in professional spheres such as podcasting.
Open Source Software: A Lesson on Rights
With its GNU GPLv2 license, Audacity is more than mere software; it is an educational journey in itself, with room for dialogue about copyright law. It invites students into the discussions of intellectual property rights that are increasingly relevant in our digital age, and it offers Linux users a straightforward installation through standard repositories:
$ sudo apt-get install audacity
or, on Fedora:
$ sudo dnf install audacity
The software continues to stand its ground against the backdrop of continually developing technology, and installers are provided for macOS and Windows users, ensuring no one is left behind in leveraging this educational powerhouse. Audacity is also available for Linux users as a Flatpak.
According to Wikipedia, “Audacity is the most popular download at FossHub, with over 114.2 million downloads since March 2015.” Thus, as we advance into 2025 and beyond, Audacity remains at the forefront of integrating creativity with digital sound technology, enriching classrooms while providing the open source knowledge sharing that prepares students for a connected world where audio artistry goes hand in hand with academic excellence.
This article was adapted and rewritten using Ollama and the Phi3.5 model. Text was taken from an article originally published for Opensource.com in 2016.
Pandoc is a versatile command-line tool that facilitates seamless file conversions between different markup formats. It supports an extensive range of input and output formats, making it indispensable for writers, researchers, and developers. I have found it particularly useful when converting output from LLMs to HTML or more common word processing formats.
Pandoc’s strength lies in its support for various input formats, including Markdown, HTML, LaTeX, Open Document, and Microsoft Word. It can convert those documents to PDF, HTML, EPUB, and even PowerPoint presentations. This flexibility makes Pandoc an invaluable tool for individuals working with documents across different platforms and tools.
Here are some specific examples that may fit your use case.
1. Converting Markdown to HTML:
Markdown, known for its simplicity and readability, is widely used for creating content for the web. With Pandoc, you can effortlessly convert Markdown files to HTML, enabling seamless web content publishing. For instance, the following command converts a Markdown file named “example.md” to HTML:
$ pandoc example.md -o example.html
2. Generating PDF from LaTeX:
LaTeX, renowned for its powerful typesetting capabilities, is favored for academic and technical documents. Pandoc seamlessly converts LaTeX files to PDF, producing high-quality documents suitable for printing or digital distribution. Consider the following command to convert a LaTeX file named “paper.tex” to PDF:
$ pandoc paper.tex -o paper.pdf
3. Transforming Word documents to Markdown:
Many writers and researchers prefer working with Markdown due to its simplicity and portability. With Pandoc, you can convert Microsoft Word documents to Markdown, allowing editing and collaboration using lightweight, text-based tools. Use the following command to convert a Word document named “report.docx” to Markdown:
$ pandoc report.docx -o report.md
4. Creating EPUB from HTML:
EPUB, a popular e-book format compatible with a wide range of e-readers and mobile devices, is a common choice for digital content distribution. If you have content in HTML format, Pandoc can assist in converting it to EPUB for convenient distribution and reading. Here’s an example command to convert an HTML file named “book.html” to EPUB:
$ pandoc book.html -o book.epub
5. Converting Markdown to PowerPoint:
Pandoc can also turn a Markdown file into a PowerPoint presentation. Use the following command to convert a Markdown file named “myslides.md” to a .pptx file:
$ pandoc myslides.md -o myslides.pptx
You can open the resulting .pptx file in PowerPoint.
In addition to these examples, Pandoc offers extensive customization options for fine-tuning the output of document conversions. Users can specify styling, metadata, and other parameters to ensure the converted files meet their specific requirements.
In conclusion, Pandoc stands as a robust and versatile tool for document conversion, offering support for a wide array of input and output formats. Pandoc can help streamline your workflow and enhance your document management capabilities, whether you’re a writer, researcher, or developer.
I have been experimenting a lot with Ollama and other artificial intelligence tools, and the answers to my prompts are always rendered in Markdown. I have MarkText on my Linux computer and MacDown on my MacBook Air, so I can easily copy and paste the output into either of those editors and save it as a Markdown file on my computer. However, when I want to share those files with colleagues who are unfamiliar with Markdown, I need a way to convert them into a format that's easily accessible for them. My Markdown editors can only export Markdown files as HTML or PDF.
That problem is easily solved with Pandoc, a great tool that anyone can install on Linux, macOS, or Windows and that easily converts Markdown into any number of different formats. Install Pandoc on Linux with the following commands:
Debian-based systems:
$ sudo apt install pandoc
RPM-based systems:
$ sudo dnf install pandoc
On macOS, use Homebrew to install Pandoc by opening a terminal and entering the following command:
$ brew install pandoc
You can install Pandoc on Microsoft Windows using Chocolatey with the following command:
choco install pandoc
Once the application is installed, it works very well from the command line, and the project maintains great documentation. I needed to convert a Markdown document to .docx so my Windows-using colleagues could easily read the output from the LLM I had been using. Guided by the documentation and the program's man page on my Linux system, I entered the following command:
$ pandoc ProposedLease.md -o ProposedLease.docx
The conversion was flawless and occurred in a second or two. Your experience may vary based on CPU, RAM, and the length of the document being converted. The Markdown file of our lease review was converted to “ProposedLease.docx”, which I could easily share with my colleagues who were using Microsoft Word.
If you are uncomfortable installing software on your computer, or you are an unprivileged user in a corporate or educational setting, you can use Pandoc online. Pandoc is open source with a GPL v2 license. For more information about Pandoc, be sure to consult its website and examine all of the options the program provides.
Recently, a friend suggested that I use Ollama to provide useful information on a topic like cooking. I used my experiences customizing a model file and the latest iteration of Llama3.1 to accomplish that goal. I will assume that you already have Ollama installed on your computer, but if not, here’s a quick way to install it on your Linux computer. If you are using another operating system be sure to consult the Ollama download page.
curl -fsSL https://ollama.com/install.sh | sh
Now that you have Ollama installed, we will use the latest iteration of Llama3.1. I am going to download Llama3.1 to my computer.
ollama pull llama3.1
I created a model file named “cooking” specifically for this exercise using the nano editor:
FROM llama3.1:latest
# set the temperature to 1 (higher is more creative, lower is more coherent)
PARAMETER temperature 1
# Set the system prompt
SYSTEM """
You are a master chef. You provide detailed gluten free recipes on request.
"""
Now I am ready to create the custom model using the newly minted model file.
$ ollama create cooking -f ./cooking
This custom model will function as an artificially intelligent chef. I run the model with the following command:
$ ollama run cooking
I gave my intelligent assistant the following prompt:
>>>you have peanut butter, milk and honey provide one good gluten free meal recipe.
What a delightful combination! Here’s a simple yet satisfying gluten-free breakfast recipe that incorporates all three ingredients:
Gluten-Free Peanut Butter Pancakes with Honey and Milk
Servings: 4-6 people
Ingredients:
1 cup peanut butter (make sure it’s free from cross-contamination with gluten)
1/2 cup milk (you can use any type of milk, such as almond, coconut, or dairy-based)
1 large egg
1 teaspoon vanilla extract
Pinch of salt
Honey for serving (about 2-3 tablespoons per person)
Instructions:
In a blender or food processor, combine peanut butter, milk, egg, vanilla extract, and salt. Blend until smooth.
The batter should be thick but still pourable. If it’s too thick, you can add a little more milk. If it’s too thin, you can add a bit more peanut butter.
Heat a non-stick skillet or griddle over medium heat. Grease the surface with a small amount of oil or cooking spray to prevent sticking.
Using a 1/4 cup measuring cup, scoop the batter onto the skillet or griddle.
Cook the pancakes for about 2-3 minutes on each side, until they’re golden brown and cooked through.
Serve the peanut butter pancakes hot with a drizzle of honey. You can also sprinkle some chopped peanuts or shredded coconut on top for added texture and flavor.
Tips and Variations:
For an extra special treat, you can make honey syrup by mixing 1 part honey with 1 part milk in a small saucepan over low heat. Whisk until the mixture is smooth and warm.
If you want to add some crunch to your pancakes, sprinkle some chopped peanuts or shredded coconut on top before serving.
Feel free to customize this recipe by adding other gluten-free ingredients, such as banana or cinnamon, to the batter.
Enjoy your delicious gluten-free peanut butter pancakes with honey!
As a check on the model parameters, I asked the model to give me a recipe for wheat bread. It responded with the following caveat.
“I’d be happy to provide you with a traditional wheat bread recipe! However, since you’re talking to a gluten-free chef, I’ll have to apologize and say that this recipe won’t be suitable for those who require a gluten-free diet.”
Ollama is open source with an MIT license. Llama3.1 has a community license.
Continuing my exploration of using a locally hosted Ollama on my Linux desktop computer, I have been doing a lot of reading and research. Today, while having lunch with a university professor, he asked me some questions I didn’t have an immediate answer to. So, I went back to my research to find the answers.
My computer is a Linux desktop with an 11th-generation Intel Core i7-1165G7 processor and 64 gigabytes of RAM. Until today, I have been interacting with Ollama and several models, including Gemma, Codegemma, Phi-3, and Llama3.1, from the command line. Running the Ollama command-line client and interacting with LLMs locally at the Ollama REPL is a good start, but I wanted to learn how to use Ollama in applications and today I made a good start.
Python is my preferred language, and I use VSCodium as my editor. First, I needed to set up a Python virtual environment. I have a 'Coding' directory on my computer, but I wanted to set up a separate one for this project.
$ python3 -m venv ollama
Next, I activated the virtual environment:
$ source ollama/bin/activate
Then, I needed to install the ‘ollama’ module for Python.
$ pip install ollama
Once the module was installed, I opened VSCodium. I used the 'ollama list' command to make sure that 'codegemma' was installed, then took a code snippet I found online and tailored it to generate some Python code to draw a circle spiral.
import ollama
response = ollama.generate(model='codegemma', prompt='Write a Python program to draw a circle spiral in three colors')
print(response['response'])
The model query took some time to complete. Despite having a powerful computer, the lack of a GPU significantly impacted performance, even on such a minor task. The resulting code looked good.
import turtle
# Set up the turtle
t = turtle.Turtle()
t.speed(0)
# Set up the colors
colors = ['red', 'green', 'blue']
# Set up the circle spiral parameters
radius = 10
angle = 90
iterations = 100
# Draw the circle spiral
for i in range(iterations):
    t.pencolor(colors[i % 3])
    t.circle(radius)
    t.right(angle)
    radius += 1
# Hide the turtle
t.hideturtle()
# Keep the window open
turtle.done()
Years ago, I watched a TED talk by Larry Lessig about laws that stifle creativity. He made several excellent points in his speech, and it got me thinking about whether we are reaching a critical point in terms of laws regulating the use of generative AI. Recently, I listened to a podcast where the host claimed that there is no truly open-source AI and that, eventually, an incestuous situation could develop due to web scraping to train large language models (LLMs). This could lead to the creation of content by these LLMs and the recreation of content from the content created by the large language models, potentially resulting in a twenty-first-century Tower of Babel.
Do we need to build on the ideas presented in Larry’s influential talk to adapt to the current reality? Will large language models and other forms of artificial intelligence lower the quality of our culture and intelligence, or will they enhance culture and creativity as we’ve seen in the seventeen years since his talk?
Everywhere you look, someone is talking or writing about artificial intelligence. I have been keenly interested in the topic since my graduate school days in the 1990s. I have used ChatGPT, Microsoft Copilot, Claude, Stable Diffusion, and other AI software to experiment with how this technology works and satisfy my innate curiosity. Recently, I discovered Ollama, an open-source tool that runs large language models, including Meta's Llama family, locally on Linux, macOS, and Microsoft Windows. There is a great deal of concern that while using LLMs in the cloud, your data is being scraped and reused by one of the major technology companies. Ollama has an MIT license, and since it runs locally, there is no danger that your work could end up in someone else's LLM.
The Ollama website proclaims, “Get up and running with Large Language Models.” That invitation was all I needed to get started. Open a terminal on Linux and enter the following to install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
The project lists all the models that you can use, and I chose the first one in the list, Llama3.1. Installation is easy, and it did not take long to install the Llama3.1 model. I followed the instructions and, in the terminal, entered the following command:
$ ollama run llama3.1
The model began to install, which took a couple of minutes. This could vary depending on your CPU and internet connection. I have an Intel i7 with 64 GB RAM and a robust internet connection. Once the model was downloaded, I was prompted to ‘talk’ with the LLM. I decided to ask a question about the history of my alma mater, St. Bonaventure University. I entered the following commands:
$ ollama run llama3.1
>>>What is the history of St. Bonaventure University?
The results were good but somewhat inaccurate. “St. Bonaventure University is a private Franciscan university located in Olean, New York. The institution was founded by the Diocese of Buffalo and has a rich history dating back to 1856.” St. Bonaventure is located near Olean, New York, and it is in the Diocese of Buffalo, but it was founded in 1858. When I asked the model to name some famous St. Bonaventure alumni, the inaccuracies were comic: Bob Lanier is a famous alumnus, but Danny Ainge is not.
The results are rendered in Markdown, which is a real plus. I also knew that a GPU would render the results much quicker, so I soon installed Ollama on my M2 MacBook Air. The directions there are much easier: download Ollama-darwin.zip, unzip the archive, and double-click the Ollama icon. The program is installed in the MacBook's Applications folder. When launched, it directs me to the Mac Terminal app, where I can enter the same commands I had entered on my Linux computer.
Unsurprisingly, Ollama uses a great deal of processing power, which is lessened if you run it on a computer with a GPU. My Intel NUC 11 is a very powerful desktop computer with a quad-core 11th Gen Intel Core i7-1165G7, 64 gigabytes of RAM, and a robust internet connection for downloading additional models. I posed similar questions to the Llama3.1 model, first on the Intel running Linux and then on the M2 MacBook Air running macOS. You can see the CPU utilization below on my Linux desktop: it's pegged, and output from the model is slow, at roughly 50 words per minute. Contrast that with the M2 MacBook, which has a GPU: CPU utilization sat at approximately 6.9%, and words arrived faster than I could read.
While Ollama's Llama3.1 might not excel at history recall, it does very well when asked to create Python code. I entered a prompt to create Python code to draw a circle without specifying how to accomplish the task. It rendered the code shown below. I had to install the 'pygame' module, which was not on my system:
$ sudo apt install python3-pygame
# Python Game Development
import pygame
from pygame.locals import *
# Initialize the pygame modules
pygame.init()
# Create a 640x480 size screen surface
screen = pygame.display.set_mode((640, 480))
# Define some colors for easy reference
WHITE = (255, 255, 255)
RED = (255, 0, 0)
while True:
    # Handle events
    for event in pygame.event.get():
        if event.type == QUIT or (event.type == KEYDOWN and event.key == K_ESCAPE):
            pygame.quit()
            quit()

    screen.fill(WHITE)  # Fill the background with white color

    # Drawing a circle on the screen at position (250, 200), radius 100
    pygame.draw.circle(screen, RED, (250, 200), 100)

    # Update the full display Surface to the screen
    pygame.display.flip()
I copied the code into VSCodium and ran it. You can see the results below.
As I continue experimenting with Ollama and other open-source LLMs, I’m struck by the significance of this shift toward local, user-controlled AI. No longer are we forced to rely on cloud-based services that may collect our data without our knowledge or consent. With Ollama and similar projects, individuals can harness the power of language models while maintaining complete ownership over their work and personal information. This newfound autonomy is a crucial step forward for AI development and I’m eager to see where it takes us.
I have dozens of VHS tapes, recorded in some cases nearly 30 years ago, of our children when they were young. About ten years ago, I used a Linux computer and dvgrab to capture the video through a FireWire port on the computer and an aging digital video camera. The setup worked quite well, and using this process I converted many of the analog tapes to MP4s.
I was eager to share some video clips with our grandson recently. I wanted him to see what his Mom looked and acted like when she was his age. The videos, converted to digital format and residing in a folder on my Linux computer, were ready to be transferred to my iPhone for sharing.
My usual file transfer method, qrcp, has been reliable for moving files between my Linux desktop and iOS devices. This time, however, the video transferred seamlessly but the audio track was mysteriously absent. That disappointment led me to Handbrake as a potential solution; I have used Handbrake in the past to convert video files to a format compatible with iOS and other modern digital playback devices.
I installed Handbrake as a Flatpak on my Linux desktop, but you can just as easily install it as a system package.
Debian-based systems:
$ sudo apt install handbrake
RPM-based systems:
$ sudo dnf install handbrake
Once installed, I launched the program.
From the 'File' menu, I selected 'Open Source', which opens a dialog box where I could select the video file I wanted to convert. I selected the 128 MB MP4 and clicked 'Open' at the bottom of the program window.
Looking again at the Handbrake program display, I had some choices to make to ensure that the converted video would display properly on an iPhone or other iOS device.
Referring to the screenshot above, it is important to choose the format in which you want to save the video. There are three choices: MPEG-4, Matroska, and WebM. I chose MPEG-4 and, within that, 'Web Optimized', which ensures the converted video will be a smaller file that is more easily shared on the web or from a mobile device. At the bottom of the program window, you can choose the name of the completed file. The default is the original name, but I suggest a different name so that you don't overwrite the original, which is important for archival purposes. The default save location is your 'Videos' folder, but you can easily choose another folder on your system.
Once you are sure you have made all the proper selections, click the 'Start' button at the top of the program window.
This begins transcoding the larger MP4 into a smaller file compatible with iOS devices. The process takes a brief period of time, depending in part on your processor's speed. The new video is 42 megabytes, a substantial reduction from its original size, and can be replayed and reshared on a mobile device. Handbrake has excellent documentation. It is open source, available for Linux, Mac, and Windows, and licensed under the GNU General Public License (GPL) Version 2.
It’s been a few years since I purchased a MacBook. My last Mac was a MacBook Pro I bought in the spring of 2020. Since then, I’ve been using Linux exclusively. My desktop is an Intel NUC 11 that’s running Linux Mint Cinnamon, and I’ve no plans to change that anytime soon. However, I’ve heard lots of good reviews of Apple Silicon. I experimented with a MacMini with the M1 chip a bit over a year ago but sent it back and purchased an HP DevOne, which I had docked for just about a year.
When I upgraded to the NUC 11, the DevOne became an extra laptop, and I've been using it in that capacity since August. Last month I took it to All Things Open and used it for note-taking, writing, and tooting. I was disappointed in its battery life, and the 14-inch display was not enough for a guy who's used to more desktop real estate.
I was attracted to the MacBook Air M2's 15.3-inch display. My eyes aren't what they used to be, and I need bigger fonts on a bigger display. I read many reviews and visited a nearby Apple Store to inspect this new Mac. I was impressed and almost purchased a unit that day, but decided to walk around the mall instead and left without buying the MacBook. More positive reviews, along with commentary from some of the open-source podcasts I listen to, led me to purchase this unit in a 'Black Friday' deal from Amazon. The MacBook Air arrived today, and I got it configured the way I wanted. I installed the latest Python from Python.org and Visual Studio Code.
I wanted to ensure that I could use this new laptop to continue to hone my Python skills.
I used Homebrew to install some of my other favorite open-source apps, including GnuCash, MacDown, and Joplin. I'm not doing any heavy lifting with this laptop, but I was attracted by its reported long battery life. This MacBook Air M2 came with a 256 GB SSD and 8 GB of RAM. I like the feel of the keyboard and the overall performance and build quality. There are no readily apparent downsides to this new purchase.