Unleashing the Future: BlueSky and Its Open-Source Vision for Social Networking

Social networks have proliferated, and they are at the center of much of our modern discourse. They can provide a level of sharing and learning that is not easily matched, and their asynchronous communication means that users do not have to be online at the same time. I joined Twitter in 2008 at the behest of the NYSCATE conference, where I was presenting that year. I used it sparingly at first, not knowing exactly how or what to share, but over time I became a prolific user of the platform, and the Twitterverse became integral to my pursuit of lifelong learning. Later, when I started writing for Opensource.com, it became a way to share our articles with a wider audience.

That experience worked quite well for a while, but we learned at a community conference in 2018 that what we were sharing wasn’t getting the traction it used to, as the medium was becoming increasingly controlled by algorithms that no longer surfaced our posts the way they once had. I don’t give up easily, but in 2022 I began to move away from the Twitterverse and explore Mastodon. It was a new gig, and it took some time to learn how to share and to find other open source nerds like me.

Then a new star emerged in the social media constellation when public signups for BlueSky opened in February 2024. BlueSky was launched by Twitter co-founder Jack Dorsey. It embraces decentralization, giving users more control over their data and interactions, an approach that fosters greater privacy and security while encouraging a diverse and vibrant community. BlueSky is deeply rooted in open source principles: the platform runs on the AT Protocol, which is open source and allows for decentralized social networking. The AT Protocol does not connect with ActivityPub, the protocol that Mastodon and the rest of the Fediverse run on. BlueSky users can post three-hundred-character ‘skeets,’ which is what user shares are called, while Mastodon users are used to five-hundred-character shares called ‘toots.’

Regardless of whether you are skeeting or tooting, there is a great deal to celebrate with BlueSky. I signed up yesterday, and it’s been great. The signup process is easy, and there is a friendly algorithm that suggests content based on the people you follow and the ‘skeets’ you like. I have found it easy to connect with open source folks I have followed on other platforms, too. Though the platform is decentralized, I found that following folks I know was much easier than following Mastodon users. According to ZDNet, BlueSky currently has about 16.7 million users.

My primary interests are Linux, open source, and education. I found it easy to connect with folks in those areas and to share what I’m learning. You can follow the Linux Foundation, Fedora Linux, Linux Mint, David Both, and All Things Open. You can follow me on BlueSky too. Happy skeeting!

How to Easily Remove Metadata from Photos with ExifCleaner

In today’s world, nearly everyone owns a digital camera. Cameras are now integral components of smartphones and laptops. If you use consumer electronics, you likely have access to a digital camera.

As a result, billions of digital images are available on the Internet from various devices and sources. Each image taken with a digital camera includes Exchangeable Image File Format (Exif) metadata, which provides information about when and where the picture was taken, the camera used, file size, MIME type, color space, and much more.

Every picture you capture with a digital camera contains numerous tags with a wealth of information, some of which may be considered confidential. Major social media platforms claim to remove this metadata to protect users from cybercrime; however, this is not true for individuals who run their blogs or wikis and post images of family gatherings or classrooms. Someone could download an image from these sites and potentially access sensitive personal information stored in the metadata.

Viewing Exif Data

How can you check what metadata is included in the images you share, and how can you remove it? I recently discovered an open-source project called ExifCleaner. ExifCleaner is a cross-platform tool that easily removes all Exif metadata from images, videos, PDFs, and other file types.
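
If you are curious about what is already embedded in your own photos, you can inspect the tags before cleaning them. For example, the widely used exiftool utility (a separate package in most distributions; the file name below is just a placeholder) prints every tag it finds:

$ exiftool vacation-photo.jpg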

Installing ExifCleaner

ExifCleaner is released under the MIT license and is easy to install and use. Download and install the AppImage, deb, or rpm file for your distribution. macOS and Windows downloads are available too. If you are a Homebrew user, you can install ExifCleaner with the following command:

brew install --cask exifcleaner

ExifCleaner was built for Intel macOS, so Rosetta 2 must be installed to run it on Apple Silicon Macs.

Use ExifCleaner

Once installed, launch the graphical application. You can drag and drop an image into the window or use the “Open” option from the File menu to load an image. Multiple images can be loaded at once.

Screen picture by Don Watkins CC by SA 4.0

Once loaded, ExifCleaner clears all metadata instantly. There’s no further action required, but there’s also no confirmation or warning, so only open files in ExifCleaner that you want to scrub metadata from. ExifCleaner can process dozens of file types, including JPG, 3G2, 3GP2, AAX, CR2, MOV, PDF, PNG, and more.

Try ExifCleaner

ExifCleaner is available in twenty-four languages and boasts a large development community. If you’re interested in contributing to the project’s development, contact the team and check out the source code. To learn more about ExifCleaner, visit the official website.



The Perfect Open Source Solution for Creating Stunning Photo Collages on Linux

Open source software is amazing, and it seems like there is always a solution that comes in handy when I look hard enough. Today was one of those days, when an internet search prompted by a question I read on Mastodon netted another open source solution. Mastodon user Bryan Mitchell asked, “Are there any photo editing software programs out there where you can put a set of photos into a collage?” Bryan had formerly used Google’s Picasa, but that is no longer available. A quick search revealed an open source project called PhotoCollage. It is an easy-to-use software package written in Python and released under the GPL 2.0 license. It was easy for me to find the correct install command for Linux Mint:

$ sudo apt install photocollage

You can also install PhotoCollage on RPM-based distributions with the following command:

$ sudo dnf install photocollage

You can also elect to install it with pip by using the following command:

$ sudo pip3 install photocollage

It can be launched from the command line or, in my case, from the “Graphics” submenu of my Cinnamon desktop. Click the ‘Add images…’ button and start adding pictures to your collage.

Screen Picture by Don Watkins CC by SA 4.0

In short order I added a number of pictures from a directory on my computer. There is a button to ‘Save poster,’ two buttons in the middle of the panel to ‘go back’ or ‘go forward,’ and one more at the end to ‘Regenerate’ the collage.

Screen Shot by Don Watkins CC by SA 4.0

The last button on the panel is ‘Settings,’ which allows you to choose different picture sizes and templates. You can also select the border thickness and background color.

Screen picture by Don Watkins CC by SA 4.0

This software is easy to use, and the results are very good. Images can be saved in ten different formats, including bitmap paint, JPEG, GIF, Microsoft Paint, PCX, PNG, TGA, TIFF, WebP, and XBM. The software integrates with GNOME and is available in English, French, German, Czech, Italian, Bulgarian, Dutch, Russian, Spanish, Polish, and Ukrainian. The lead developer is Adrien Verge, and there are over twenty other contributors.

Open WebUI: A nuanced approach to locally hosted Ollama

Open WebUI offers a robust, feature-packed, and intuitive self-hosted interface that operates seamlessly offline. It supports various LLM runners, including Ollama and OpenAI-compatible APIs. Open WebUI is open source with an MIT license, is easy to download and install, and has excellent documentation. I chose to install it on both my Linux computer and my M2 MacBook Air. The software is written in Svelte, Python, and TypeScript and has a community of more than two hundred thirty developers working on it.

The documentation states that one of its key features is effortless setup, and it was indeed easy to install. I chose the Docker installation. It boasts a number of other great features, including OpenAI API integration, full Markdown and LaTeX support, and a model builder to easily create custom Ollama models within the application. Be sure to check the documentation for all the nuances of this amazing software.

I decided to install Open WebUI with bundled Ollama support for CPU only, since my Linux computer does not have a GPU. This container image combines Open WebUI and Ollama for an effortless setup. I used the following Docker command, copied from the GitHub repository:

$ docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama

On the MacBook I chose a slightly different install, opting to use the existing Ollama install because I wanted to conserve space on the smaller host drive. I used the following command, taken from the GitHub repository:

% docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
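
In either case, you can confirm the container is running before opening a browser. This quick sanity check uses the container name set by the commands above:

$ docker ps --filter name=open-webui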

Once the install was complete, I pointed my browser to:

http://localhost:3000/auth

I was presented with a login page and asked to supply my email and a password.

Screen picture by Don Watkins CC by SA 4.0

The first time you log in, use the ‘Sign up’ link and provide your name, email, and a password. Subsequent logins require only your email and password. After logging in for the first time, you are presented with a “What’s New” notice about the project and software.

After pressing “Okay, Let’s Go!” I was presented with this display.

Screen picture by Don Watkins CC by SA 4.0

Now I am ready to start using Open WebUI.

The first thing you want to do is ‘Select a Model’ at the top left of the display. You can search for models that are available from the Ollama project.

Screen picture by Don Watkins CC by SA 4.0

On an initial install, you will need to download a model from Ollama.com. Enter the model name you want in the search window and press ‘Enter,’ and the software will download the model to your computer.

Screen picture by Don Watkins CC by SA 4.0
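
If you prefer the terminal, models can also be pulled outside the web interface. With the bundled image from the Linux install above, the Ollama binary lives inside the container (named open-webui by the docker run command), so a pull of the Phi3.5 model would look roughly like the following; on the MacBook setup, the host’s own ollama pull command does the same job:

$ docker exec -it open-webui ollama pull phi3.5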

Now that the model is downloaded and verified, I am ready to begin using Open WebUI with my locally hosted Phi3.5 model. Other models can be downloaded and installed just as easily. Be sure to consult the excellent getting started guide, and have fun using this feature-rich interface. The project also provides tutorials to assist new users. Open WebUI’s intuitively designed interface makes it easy to take full advantage of its comprehensive array of features.

Discovering New Passions: Writing, Linux, and Sharing Open Source Stories

Our children gifted me a subscription to Storyworth for Father’s Day this year, and each week a new writing prompt arrives in my email inbox. This week the prompt was: what are some hobbies you have pursued or want to pursue in your retirement? It took me a while to think about that topic. I am not a guy who puts together model planes, I don’t have a train set, and I don’t play golf.

I walk, tinker with computers, and write. I didn’t think of writing as a hobby until this week, and maybe it’s not exactly a hobby in the traditional sense, but it’s a way for me to share my thoughts and journey with the wider world. I have been blogging regularly since early 2006 and have written over nineteen hundred articles for my own blog. In addition, I have written hundreds of articles that have been published on a variety of sites, including Both.org, where I am a regular contributor. I also write for Allthingsopen.org and TechnicallyWeWrite.com.

I have created most of my content this year for Both.org, where we focus on Linux and open source. We are seeking individuals who would like to share their Linux and open source journey with our audience. Our website has been attracting more and more visitors. If you have an open source story to share, we encourage you to join us. Later this month, I’ll be traveling to Raleigh, NC to attend All Things Open. This will be my tenth ATO, and I am excited to learn from the people I will meet.

Write for us! We have a style sheet with guidelines and we’d love for you to share your open source journey with us.

Empowering Creators with Open Source Software

As we welcome another academic year, the integration of the creative arts within school curricula remains vital. Among the open source resources that empower students to engage with digital soundscapes is Audacity, a free and versatile audio recording and editing application cherished by educators for its simplicity and power in the classroom. Audacity’s capabilities have only grown, making it an indispensable tool not just today but as we look ahead to 2025 and ever-evolving educational needs:

Podcasting Platform of Choice: Connectivity through Sound
Teachers and students alike have adopted Audacity for crafting podcasts, serving an array of purposes from explaining classroom procedures directly within lessons to delivering language learning content. This interactive format has become a cornerstone of modern pedagogy by facilitating out-of-class communication that supplements the traditional teaching experience and by offering students additional access points into course material through auditory means, which can enhance comprehension for many learners.

Language Acquisition with Audio Engagement: Learning Languages Through Listening
For language education, Audacity has been transformative by providing a platform where foreign language pupils record their spoken lessons and listen to them repeatedly—all within the safety net of open source software that champions accessibility for all students. This feature nurtures self-directed learning as well as peer interaction in multi-language classrooms, setting up an immersive auditory environment akin to real-world conversational scenarios.

Creative Expression Through Sound: Student Audio Projects Evolving with Time and Technology
Students’ love for sound extends beyond passive listening; they are creators in their own right using Audacity to produce unique audio projects such as bird songs, oceanic ambient tracks, or even creating custom narrations over chosen background music. This engagement stimulates imagination while providing a practical understanding of digital tools and copyright laws through exploring resources from Creative Commons and Wikimedia sound collections—a learning process that teaches respect for intellectual property alongside technical skills in audio manipulation.

Interviews as Interactive Learning: Engaging with Experts Through Sound Waves
Audacity allows students to conduct interviews, integrating them into their educational activities by adding layers of personal experience and expert insight directly through the auditory channel—a method that not only humanizes learning but also bridges generations within a classroom setting as older family members share experiences with younger ones. This formative approach promotes active listening skills while fostering familial bonds, an essential lesson beyond academics alone.

From Capture to Share: Effective Audio File Management for the Modern Classroom Stage and Beyond (2024 Edition)
Education today is not just about content but also about delivery methods, which is why Audacity matters as a tool for helping students understand how different audio file formats serve different platforms. From .aup project files that facilitate ongoing collaboration to MP3 and WAV exports suitable for wider sharing via streaming web servers or digital portfolios, the software prepares young minds not only with technical skills but also with the industry standards they will encounter in professional spheres such as podcasting.

Open Source Software: A Lesson on Rights (2024 Update) and Legacy of Ubuntu’s Free Audio Education Toolkit
With its GNU GPLv2 license, Audacity is more than mere software; it is an educational journey in itself, with room for dialogue about copyright law. It invites students into discussions of intellectual property rights that are increasingly relevant in our digital age, and it offers Linux users a straightforward installation through standard repositories:

$ sudo apt-get install audacity

or, with Fedora:

$ sudo dnf install audacity

The software continues to hold its ground against the backdrop of continually developing technology, with instructions provided for macOS and Windows users to ensure no one is left behind in leveraging this educational powerhouse. Audacity is also available to Linux users as a Flatpak.
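
If your distribution has Flatpak configured with the Flathub remote, installation is a single command (the application ID below is the one Flathub lists for Audacity):

$ flatpak install flathub org.audacityteam.Audacity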

According to Wikipedia, “Audacity is the most popular download at FossHub, with over 114.2 million downloads since March 2015.” As we advance into 2025 and beyond, Audacity remains at the forefront of integrating creativity with digital sound technology, enriching our classrooms while providing the open source knowledge sharing that prepares students for a connected world where audio artistry goes hand in hand with academic excellence.

This article was adapted and rewritten using Ollama and the Phi3.5 model. Text was taken from an article originally published for Opensource.com in 2016.

Taking a look at financial data with Ollama

Several weeks ago a person asked me to assist her with organizing her financial records to take to a tax professional. She does not use a financial program like GnuCash, which would have made the project much easier. Instead, we downloaded a CSV file from her bank, and she used Microsoft Excel to add a category to each expense. This was a tedious process. I then used a pivot table to further organize and display her data, which she took to the tax preparer.

Recently, while working on other projects with Ollama, I wondered if it might be possible to use a local large language model to accomplish the same task. Ollama is easy to download; if you are a Linux user like I am, you can enter the following command in a terminal:

$ curl -fsSL https://ollama.com/install.sh | sh
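
When the script finishes, a quick check confirms that the ollama command is available; it should print the installed version:

$ ollama --version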

I experimented with phi3.5 and Llama3.2 and found the latter to work better for me. It is easy to pull the model down to your computer with the following command:

$ ollama pull llama3.2

Once the model was downloaded to my computer, I wanted to make my own custom model to analyze my financial data set, which was a CSV file from the bank. I created a model file, which I called financial, using nano. Here is the text of the modelfile I created for this activity:

FROM llama3.2

# set the temperature (lower is more coherent, higher is more creative)
PARAMETER temperature 0.6

# set the system message
SYSTEM """
You are a financial analyst. Analyze the financial information I supply.
"""

I used the modelfile to create the custom model for this financial analysis. I set the temperature PARAMETER to 0.6 to keep the output more focused and accurate. I entered the following command in the terminal:

$ ollama create financial -f financial
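
Before moving on, a quick listing confirms that the new financial model appears alongside the llama3.2 base model it was built from:

$ ollama list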

This created a custom LLM based on Llama3.2 to perform the financial analysis. I made sure that the CSV file from my financial institution was in my current working directory (this is important), then entered the following command to feed the CSV file to the custom LLM:

$ ollama run financial:latest "$(cat data.csv)", Summarize my transactions.

This gave me a complete summary of the debits and credits included in the small CSV file. I have encountered some errors, and I plan to keep working with the model and reading about it. I’m encouraged by the results.

Pandoc: The Ultimate Document Conversion Tool

Pandoc is a versatile command-line tool that facilitates seamless file conversions between different markup formats. It supports an extensive range of input and output formats, making it indispensable for writers, researchers, and developers. I have found it particularly useful when converting output from LLMs to HTML or to more common word processing formats.

Pandoc’s strength lies in its support for various input formats, including Markdown, HTML, LaTeX, Open Document, and Microsoft Word. It can convert those documents to PDF, HTML, EPUB, and even PowerPoint presentations. This flexibility makes Pandoc an invaluable tool for individuals working with documents across different platforms and tools.
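
If you are unsure whether Pandoc supports a particular format, it can list every input and output format it understands:

$ pandoc --list-input-formats
$ pandoc --list-output-formats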

Here are some specific examples that may fit your use case.

1. Converting Markdown to HTML:

Markdown, known for its simplicity and readability, is widely used for creating content for the web. With Pandoc, you can effortlessly convert Markdown files to HTML, enabling seamless web content publishing. For instance, the following command can be used to convert a Markdown file named “example.md” to HTML:

$ pandoc example.md -o example.html

2. Generating PDF from LaTeX:

LaTeX, renowned for its powerful typesetting capabilities, is favored for academic and technical documents. Pandoc seamlessly converts LaTeX files to PDF, producing high-quality documents suitable for printing or digital distribution. Consider the following command to convert a LaTeX file named “paper.tex” to PDF:

$ pandoc paper.tex -o paper.pdf

3. Transforming Word documents to Markdown:

Many writers and researchers prefer working with Markdown due to its simplicity and portability. With Pandoc, you can convert Microsoft Word documents to Markdown, allowing editing and collaboration using lightweight, text-based tools. Use the following command to convert a Word document named “report.docx” to Markdown:

$ pandoc report.docx -o report.md

4. Creating EPUB from HTML:

EPUB, a popular e-book format compatible with a wide range of e-readers and mobile devices, is a common choice for digital content distribution. If you have content in HTML format, Pandoc can assist in converting it to EPUB for convenient distribution and reading. Here’s an example command to convert an HTML file named “book.html” to EPUB:

$ pandoc book.html -o book.epub

5. Converting Markdown to a PowerPoint presentation:

To turn a Markdown file named “myslides.md” into a PowerPoint presentation with Pandoc, you can use the following command:

$ pandoc myslides.md -o myslides.pptx

You can open the resulting .pptx file in PowerPoint.

In addition to these examples, Pandoc offers extensive customization options for fine-tuning the output of document conversions. Users can specify styling, metadata, and other parameters to ensure the converted files meet their specific requirements.
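
For example, a command along these lines (the file names are placeholders) produces a standalone HTML page with a table of contents, a document title set through the metadata, and a linked stylesheet:

$ pandoc example.md -o example.html --standalone --toc --metadata title="My Example Page" --css style.css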

In conclusion, Pandoc stands as a robust and versatile tool for document conversion, offering support for a wide array of input and output formats. Pandoc can help streamline your workflow and enhance your document management capabilities, whether you’re a writer, researcher, or developer.

Seamlessly Convert Your Markdown Docs to Office Formats Using Pandoc

I have been experimenting a lot with Ollama and other artificial intelligence tools, and the answers to my prompts are always rendered in Markdown. I have MarkText on my Linux computer and MacDown on my MacBook Air, so I can easily copy and paste the output into either of those editors and save it as a Markdown file. However, when I want to share those files with colleagues who are unfamiliar with Markdown, I need a way to convert them into a format that’s easily accessible for them. My Markdown editors can only export Markdown files as HTML or PDF.

That problem is easily solved with Pandoc, a great tool that anyone can install on Linux, macOS, or Windows and that easily converts Markdown into any number of different formats. Install Pandoc on Linux with one of the following commands:

$ sudo dnf install pandoc
or 
$ sudo apt install pandoc

On macOS, use Homebrew to install Pandoc by opening a terminal and entering the following command:

$ brew install pandoc

You can install Pandoc on Microsoft Windows using Chocolatey with the following command:

choco install pandoc

Once the application is installed, it works very well from the command line. The project maintains great documentation. I needed to convert a Markdown document to .docx so that my Windows-using colleagues could easily read the output from the LLM I had been using. I consulted the documentation and the program’s man page on my Linux system, then entered the following command:

$ pandoc -o ProposedLease.docx -f markdown -t docx LeaseReview.md

The conversion was flawless and occurred in a second or two; your experience may vary based on CPU, RAM, and the length of the document being converted. The Markdown file of our lease review was converted to “ProposedLease.docx,” which I could easily share with my colleagues who were using Microsoft Word.

If you are uncomfortable installing software on your computer, or you are an unprivileged user in a corporate or educational setting, you can use Pandoc online. Pandoc is open source with a GPL v2 license. For more information about Pandoc, be sure to consult the project’s website and examine all of the options the program provides.

Evaluating writing using open source artificial intelligence

In today’s digital age, writers seek tools that enhance their craft and provide real-time feedback and assistance. Enter Ollama, an open source tool that makes it easy to run large language models locally. Coupled with the Phi3 model, it promises real benefits for refining your writing through evaluations that go well beyond grammar checking. This article explores how Ollama, paired with Phi3’s language comprehension, can support a writer’s journey toward excellence. So buckle up as we explore your writing potential using these AI tools!

I have been experimenting with Ollama and various models designed to work with it. In this article, I wanted to use Ollama and Phi3 to demonstrate the power of open source to revolutionize and evaluate writing. Both Ollama and Phi3 are open source, both have MIT licenses, and they work well together. You need to have Ollama installed on your computer. My daily driver is an Intel NUC 11 with an i7 processor running Linux Mint Cinnamon. Though this is a powerful computer, running a large language model takes more than a great CPU. A powerful GPU would make the process a little quicker.

First, I needed to make sure that Ollama was installed. On Linux, you do that by executing the following command in a terminal:

$ curl -fsSL https://ollama.com/install.sh | sh

The installation is very straightforward. Next, download the Phi3 model by entering the following command in your terminal:

$ ollama pull phi3

This pulls the Phi3 model down to your computer. You can check that it is there by entering the following command:

$ ollama list

You should see a list of all the models loaded on your computer.

NAME            	ID          	SIZE  	MODIFIED       
phi3:latest     	4f2222927938	2.2 GB	56 minutes ago	
cooking:latest  	59c8496c62aa	4.7 GB	7 days ago    	
llama3.1:latest 	42182419e950	4.7 GB	7 days ago    	

Now I am ready to begin. I wanted Ollama and the Phi3 model to develop a rubric to evaluate my writing sample. I need to enter two commands. First, I need to get the LLM running, which opens a REPL (Read-Eval-Print Loop) prompt very similar to a Python REPL.

$ ollama run phi3:latest
>>> create a rubric to score college level writing.  

The model quickly created a rubric to score my writing, part of which I have copied and pasted here:

“College Level Writing Rubric: Essay (20 points)

I. Introduction and Thesis Statement (4/20 = 2 points):

A student’s introduction must be clear, concise, engaging, and
well-structured. A strong thesis statement should identify the topic of
discussion clearly while presenting an argument or perspective that sets
up a logical progression throughout the essay. Students will lose one
point for each instance where these two components fail to establish
context effectively (e.g., unclear writing style, unconvincing tone).

1 Point – Clear and engaging introduction with well-structured paragraphs setting the stage of discussion or argumentation


2 Points – Introduction is structured clearly but fails in some areas such as clarity of language use or lacking an enticing hook to captivate readers’ interest. (Lose one point)


1 Point – Thesis statement presented, but lacks precision and clear articulation about the topic discussed; does not establish a logical thread for essay progression

2 Points – Clearly stated thesis that presents argument or perspective while setting up an effective structure to follow throughout. (Lose one point)

0 PointS – Introduction is unclear, disjointed language use fails in
providing context and lacks persuasive tone necessary; no clear indication
of what the essay will discuss

The rubric, over seven hundred words long, was generated in a few minutes. All text output from Ollama is in Markdown. The rubric would be rendered much more quickly on an Apple Silicon MacBook with an M2 processor. Though exceptionally powerful, the NUC 11 with its i7 lacks a GPU, which would ease the strain on the CPU.

Now that the rubric has been created, I entered the following command in the Ollama REPL to evaluate an article I wrote for Opensource.com nearly ten years ago.

>>> Use the rubric to evaluate this article https://opensource.com/education/15/12/my-open-source-story-don-watkins

The process took a few minutes and provided an in-depth review of my article. The evaluation was very detailed and over nine hundred words long. I have pasted the first part of the review here.

“2 Points – Introduction engages with opening hook; however, lacks clarity in setting the stage and doesn’t align well with Watkins’ thesis statement (lose up to three points) 1 Point – Thesis statement present but vague or imprecise about what readers should expect throughout this article. Lacks clear alignment between I and II components of essay-like structure; no explicit roadmap provided for reader follow along (lose two maximum points); fails in captivating the audience right from introduction


0 PointS – Introduction lacks coherence, disjointed language use provides little context or interest to readers about open source contributions. No engaging hook presented nor clear alignment between I and II components of essay-like structure; does not provide explicit roadmap for reader follow along (lose one point)…”

Using large language models to assess writing could offer the nuanced feedback writers need to improve their craft. Are there potential issues? Will artificial intelligence models replace copywriters? What other implications might they have for how we write and rewrite?