Mastering space management with ‘dfc’

The dfc command in Linux is a powerful tool that provides users with file system space usage information. It is a tool similar to df that provides a snapshot of your file system. Here is a look at my file system using the df command:

don@Jude:~$ df
Filesystem     1K-blocks      Used Available Use% Mounted on
tmpfs            6548596      2272   6546324   1% /run
/dev/nvme0n1p2 959786032 288646724 622311100  32% /
tmpfs           32742976     84192  32658784   1% /dev/shm
tmpfs               5120        12      5108   1% /run/lock
efivarfs             192       125        63  67% /sys/firmware/efi/efivars
tmpfs           32742976         0  32742976   0% /run/qemu
/dev/nvme0n1p1    523248      6284    516964   2% /boot/efi
tmpfs            6548592       208   6548384   1% /run/user/1000

I can tell with a quick glance that I still have a lot of space available on my system. dfc offers additional features, such as color-coded output and graphical representations, making it easier to visualize disk usage at a glance. Here is a quick look at my system using dfc:

Screen picture by Don Watkins CC by SA 4.0

You can easily see that dfc provides more information, in color and in a format that is more readable for the user. You can turn off the default color option by issuing the following command:

$ dfc -c never

This provides a non-color readout of the same data.

Screen picture by Don Watkins CC by SA 4.0

You can display all the file systems, including pseudo, duplicate, and inaccessible file systems, by using the following command:

$ dfc -a

Screen picture by Don Watkins CC by SA 4.0

The dfc command was not included with my distribution, so I had to install it from the command line. On Ubuntu-based distributions:

$ sudo apt install dfc

Installation on RPM-based distributions would be the following:

$ sudo dnf install dfc

The command is open source with a BSD 3-Clause license. You can export the output of the command in HTML, JSON, TeX, and CSV formats. The man page provides excellent documentation and explains the various switches for the command. Use dfc -h to display an excellent help menu with all the options for the command.
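Exporting is handled by the -e switch; for example, to produce an HTML report (the output file name here is arbitrary):

$ dfc -e html > disk-report.html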

The Perfect Open Source Solution for Creating Stunning Photo Collages on Linux

Open source software is amazing, and it seems like there are always solutions that come in handy when I look hard enough. Today was one of those days, when an internet search for a query I read on Mastodon netted another open source solution. Mastodon user Bryan Mitchell asked, “Are there any photo editing software programs out there where you can put a set of photos into a collage?” Bryan had formerly used Google’s Picasa, but it is no longer available. A quick search revealed an open source project called PhotoCollage. It is an easy-to-use software package written in Python with a GPL 2.0 license. It was easy for me to find the correct install script for Linux Mint.

$ sudo apt install photocollage

You can also install PhotoCollage on RPM-based distributions by using the following command:

$ sudo dnf install photocollage

You can also install it with Python’s pip by using the following command:

$ sudo pip3 install photocollage

It can be launched from the command line or, in my case, from the “Graphics” submenu of my Cinnamon desktop. Click on the ‘Add images..’ button and start adding pictures to your collage.
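Assuming the default entry point installed by the package, launching from the terminal is simply:

$ photocollage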

Screen Picture by Don Watkins CC by SA 4.0

In short order, I added a number of pictures from a directory on my computer. There is a button to ‘Save poster’, two buttons in the middle of the panel to ‘go back’ or ‘go forward’, and one more at the end to ‘Regenerate’ the collage.

Screen Shot by Don Watkins CC by SA 4.0

The last button on the panel is for ‘Settings’ to allow you to choose different picture sizes and templates. You can select border thickness and background color.

Screen picture by Don Watkins CC by SA 4.0

This software is easy to use, and the results are very good. Images can be saved in ten different formats, including bitmap (BMP), JPEG, GIF, Microsoft Paint, PCX, PNG, TGA, TIFF, WebP, and XBM. The software integrates with GNOME and is available in English, French, German, Czech, Italian, Bulgarian, Dutch, Russian, Spanish, Polish, and Ukrainian. The lead developer is Adrien Verge, and there are over twenty other contributors.

The shift to ARM and the rise of Linux integration

Apple’s computers switched from Intel x86 to ARM when the company announced the M1 in November 2020. Since that time, Apple has continued to release more ARM processors, and the lineup now includes the M4 and M4 Pro. ARM stands for Advanced RISC Machine: a CPU architecture that uses a reduced instruction set and, in designs like Apple’s, does not require a separate GPU because all the processing occurs on one chip. ARM processors are designed to be cost effective, consume less power, and generate less heat than their x86 counterparts.

Not only does the ARM processor consume less power and cost less, but it also has the processing power to run large language models and AI image-generation software on individual personal computers quickly and robustly. Since Apple introduced ARM processors in its computers, there has been a drive to bring Linux to ARM. The Asahi Linux project aims to bring the power of Linux to Apple Silicon Macs, and the Pinebook Pro has brought Linux to an ARM-powered notebook computer meant to deliver a solid day-to-day Linux experience.

Manjaro comes preinstalled on the Pinebook Pro, which can also run Debian, Arch, Armbian, BSD, Gentoo, Fedora, openSUSE, and Q4OS. Despite the progress, there is still a shortage of ARM-equipped computers for Linux folks to use. System76 recently announced a server line that will be powered by ARM chips. For now, Linux users will have to content themselves with the Pinebook Pro and the Raspberry Pi 5, which are both wonderful examples of ARM computers.

Microsoft’s Surface Pro is another great example of an ARM-powered mobile computer, but to date I have not read of anyone installing Linux on it.

Ensuring Data Security Through Disk Erasure

Many people choose to encrypt their disk drives because it is one way of ensuring that your data stays secure and safe from the prying eyes of others. I always shy away from encrypting my disk because I don’t have the need for that kind of security. When one of my computers reaches end of life, or I decide to sell it, I take special measures to ensure that all the information is erased. I am also frequently called on to help clients dispose of an old computer when they purchase a new one. What do you do when selling a computer or replacing an old spinning-rust drive with a newer solid state drive? That’s when I think of securely erasing them to ensure that confidential information is removed before repurposing or disposing of them.

Fundamentally, disk erasure on Linux serves as a versatile solution that tackles security, compliance, performance, and sustainability needs, catering to the varied demands of users. Whether for individual usage or organizational requirements, disk erasure is a forward-thinking strategy in data management and information security.

Here are five command sequences to ensure that data is securely erased from your Linux data drive(s).

dd command:

$ sudo dd if=/dev/zero of=/dev/sdX bs=1M

This command writes zeros to the entire disk, effectively erasing all data.
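GNU dd is silent by default; adding status=progress prints a running byte count so you can see that the overwrite is still moving:

$ sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress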

shred command:

$ sudo shred -v /dev/sdX

The shred command overwrites the disk multiple times, making data recovery very difficult.

wipe Command:

$ sudo wipe -r /dev/sdX

The wipe command is designed to securely erase disks by overwriting them with random data.

blkdiscard Command (for SSDs):

$ sudo blkdiscard /dev/sdX

This command discards all data on the specified SSD, effectively erasing it.

parted and mkfs Commands:

$ sudo parted /dev/sdX mklabel gpt
$ sudo parted -a opt /dev/sdX mkpart primary ext4 0% 100%
$ sudo mkfs.ext4 /dev/sdX1

Using parted to create a new partition table, followed by mkfs to format the new partition (/dev/sdX1), removes the existing file system structures. Note that this does not overwrite the underlying data itself, so it is the least secure of these five options.

Replace /dev/sdX with your actual disk identifier. Always double-check the device identifier before running any of these commands to avoid accidental data loss.
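A quick way to double-check is lsblk, which lists each drive’s name, size, and model so you can confirm you are pointing at the right device:

$ lsblk -o NAME,SIZE,MODEL,MOUNTPOINT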

Fastfetch: High-Performance Alternative to Neofetch for System Information Display

Yesterday I wrote about Neofetch, which is a tool I have used in the past on Linux systems I owned. It was an easy way to provide a good snapshot of the distribution I was running and some other pertinent information about my computing environment. One of my readers replied to let me know that the project is no longer being maintained; it was last updated in August 2020. The commenter suggested that I check out Fastfetch. I thanked the reader and followed the link he provided to the GitHub repository for Fastfetch.

The project describes itself as “an actively maintained, feature-rich and performance oriented, neofetch like system information tool.” It is easy to install and provides much of the same information that Neofetch did. It does display your IP address, but the project maintains that this presents no privacy risk. Installation on Fedora and other RPM-based distributions is familiar; enter the following command:

$ sudo dnf install fastfetch

If you run an Ubuntu-based distribution like my Linux Mint daily driver, the installation requires downloading the appropriate .deb file. Once the package was installed on my system, I decided to try it.

Screen picture by Don Watkins CC by SA 4.0

Fastfetch can be easily installed on macOS with Homebrew. I decided to try it on my MacBook:

% brew install fastfetch

Screen picture by Don Watkins CC by SA 4.0

Fastfetch is written in C and has 132 contributors. It is open source with an MIT license. In addition to Linux and macOS, you can install Fastfetch on Windows with Chocolatey. The project states that Fastfetch is faster than Neofetch and is actively maintained. Fastfetch has a greater number of features than its predecessor. For more information and examples, be sure to visit the project wiki.
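To see every option it supports, use the help flag:

$ fastfetch --help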

In search of the right GPU

I rely on my powerful Intel NUC with an i7 processor and 64 GB of RAM for my daily computing needs. However, it lacks a GPU, which makes it unsuitable for the experimentation I’ve been conducting with locally hosted large language models. To address this, I use an M2 MacBook Air, which has the necessary power for some of these tasks.

I had helped some local folks purchase a refurbished Dell computer from a refurbisher. Within a couple of months they began to experience difficulty with it, and by then it was beyond the ninety-day warranty. Rather than see them lose their money, I wrote them a check for the original purchase price.

I believe that when you do good things, you will be rewarded in some fashion. I helped these folks purchase a new Dell Inspiron desktop with a full factory warranty, and when I was about to leave their home, they asked me if I wanted to take the defective computer. I thought I might be able to fix it or use it for parts. I removed the cover and discovered that this Optiplex 5060 with an i5 CPU didn’t have a traditional hard drive as I had thought, but instead was equipped with a Western Digital SN 270 NVMe drive. I also discovered that the only thing wrong with the unit was a bad external power switch. Once I removed the front bezel, I was easily able to power the device on.

Karma was working once again in my favor, as I have found it does when you do for others as you would have them do for you. I erased the Windows 11 install and installed Linux Mint 22 in its place. This unit also had two open low-profile expansion slots, and I wondered if I could find a graphics card with a GPU that would allow me to experiment with Ollama and other LLMs. I did some research and decided to purchase an XFX Speedster SWFT105 Radeon RX 6400 Gaming Graphics Card with 4GB from Amazon. The card came a couple of days later, and I installed it in one of the expansion slots.

After installing the card, I placed the cover back on the machine, connected a spare Sceptre 27-inch display and an ethernet cable, and downloaded Ollama and the Phi3 model. I downloaded and installed the ROCm modules, which helped Ollama recognize the GPU; Ollama reported that it recognized the GPU when the software finished installing. I think Ollama and the Phi3 model run faster with this unit, but maybe that’s wishful thinking. I also wanted to try Stable Diffusion on this computer and used Easy Diffusion, which I have installed on the NUC before. I was frustrated to discover that my RX 6400 card and GPU don’t work with Easy Diffusion. Am I missing something? Is there a fix?

I hope that if you’re reading this and you know of a fix for this issue, you will share it. I’d love to find an answer. Nonetheless, doing good for others always results in good coming back to you.

Breaking Free from the Cloud: Exploring the Benefits of Local, Open-Source AI with Ollama

Everywhere you look, someone is talking or writing about artificial intelligence. I have been keenly interested in the topic since my graduate school days in the 1990s. I have used ChatGPT, Microsoft Copilot, Claude, Stable Diffusion, and other AI software to experiment with how this technology works and satisfy my innate curiosity. Recently, I discovered Ollama, an open-source tool for running large language models locally on Linux, macOS, and Microsoft Windows; the Llama models it runs were developed by Meta. There is a great deal of concern that while using LLMs in the cloud, your data is being scraped and reused by one of the major technology companies. Ollama is open-source and has an MIT license. Since Ollama runs locally, there is no danger that your work could end up in someone else’s LLM.

The Ollama website proclaims, “Get up and running with Large Language Models.” That invitation was all I needed to get started. Open a terminal on Linux and enter the following to install Ollama:

curl -fsSL https://ollama.com/install.sh | sh

The project lists all the models that you can use, and I chose the first one in the list, Llama3.1. Installation is easy, and it did not take long to install the Llama3.1 model. I followed the instructions and, in the terminal, entered the following command:

$ ollama run llama3.1

The model began to install, which took a couple of minutes. This could vary depending on your CPU and internet connection. I have an Intel i7 with 64 GB RAM and a robust internet connection. Once the model was downloaded, I was prompted to ‘talk’ with the LLM. I decided to ask a question about the history of my alma mater, St. Bonaventure University. I entered the following commands:

$ ollama run llama3.1
>>> What is the history of St. Bonaventure University?

The results were good but somewhat inaccurate. “St. Bonaventure University is a private Franciscan university located in Olean, New York. The institution was founded by the Diocese of Buffalo and has a rich history dating back to 1856.” St. Bonaventure is located near Olean, New York, and it is in the Diocese of Buffalo, but it was founded in 1858. I asked the model to name some famous St. Bonaventure alumni; the inaccuracies were comical. Bob Lanier was a famous alumnus, but Danny Ainge was not.

The results are rendered in Markdown, which is a real plus. I also knew that having a GPU would render the results much quicker. I wanted to install Ollama on my M2 MacBook Air, which I soon did. I followed the much easier directions: download the Ollama-darwin.zip, unzip the archive, and double-click the Ollama icon. The program is installed in the MacBook’s Applications folder. When the program is launched, it directs me to the Mac Terminal app, where I can enter the same commands I had entered on my Linux computer.

Unsurprisingly, Ollama uses a great deal of processing power, which is lessened if you run it on a computer with a GPU. My Intel NUC 11 is a very powerful desktop computer with a quad-core 11th Gen Intel Core i7-1165G7, 64 gigabytes of RAM, and a robust connection to the internet for downloading additional models. I posed similar questions to the Llama3.1 model, first on the Intel NUC running Linux and then on the M2 MacBook Air running macOS. You can see the CPU utilization below on my Linux desktop. It’s pegged, and the output from the model is slow, at an approximate rate of 50 words per minute. Contrast that with the M2 MacBook, which has a GPU: CPU utilization of approximately 6.9% and words per minute faster than I could read.
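If you want harder numbers than my words-per-minute estimate, Ollama can report its own statistics; the --verbose flag prints token counts and generation rates after each response:

$ ollama run llama3.1 --verbose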

Screen picture by Don Watkins CC by SA 4.0

While Ollama Llama3.1 might not excel at history recall, it does very well when asked to create Python code. I entered a prompt to create Python code to draw a circle, without specifying how to accomplish the task. It rendered the code shown below. I had to install the ‘pygame’ module, which was not on my system.

$ sudo apt install python3-pygame

# Python Game Development

import pygame
from pygame.locals import *

# Initialize the pygame modules
pygame.init()

# Create a 640x480 size screen surface
screen = pygame.display.set_mode((640, 480))

# Define some colors for easy reference
WHITE = (255, 255, 255)
RED = (255, 0, 0)

while True:
    # Handle events
    for event in pygame.event.get():
        if event.type == QUIT or (event.type == KEYDOWN and event.key == K_ESCAPE):
            pygame.quit()
            quit()

    screen.fill(WHITE)  # Fill the background with white color

    # Drawing a circle on the screen at position (250, 200), radius 100
    pygame.draw.circle(screen, RED, (250, 200), 100)
    
    # Update the full display Surface to the screen
    pygame.display.flip()

I copied the code into VSCodium and ran it. You can see the results below.

Screen picture by Don Watkins CC by SA 4.0

As I continue experimenting with Ollama and other open-source LLMs, I’m struck by the significance of this shift toward local, user-controlled AI. No longer are we forced to rely on cloud-based services that may collect our data without our knowledge or consent. With Ollama and similar projects, individuals can harness the power of language models while maintaining complete ownership over their work and personal information. This newfound autonomy is a crucial step forward for AI development, and I’m eager to see where it takes us.

Five things you can do with the nano editor

In the early stages of my experience with Linux servers, I had to learn how to edit text files using the command line. While there are other powerful text editors in Linux, such as vi and vim, I found Nano to be particularly useful. Nano is a simple yet powerful text editor that comes pre-installed on many Linux distributions. You can easily install it from the command line if it’s not pre-installed on your system.

Debian-based systems:

$ sudo apt install nano

RPM based systems:

$ sudo dnf install nano

Basic Text Editing

Nano is a user-friendly text editor designed for simple and efficient text editing. To open a file, type “nano” followed by the file name in the terminal. Once inside, you can begin typing or editing text immediately. Navigation is easy, using the arrow keys to move around. To save your changes, press Ctrl + O; to exit, press Ctrl + X.
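For example, to open a file named notes.txt (the file name is arbitrary; nano creates the file if it does not already exist):

$ nano notes.txt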

Screen picture by Don Watkins CC by SA 4.0

Search and Replace

Nano has a valuable search and replace feature. To search for a specific term, press Ctrl + W, type your search term, and press Enter. To replace text, press Ctrl + \, enter the text you want to replace, followed by the new text. This feature is handy for quickly updating configuration files or scripts.

Undo and Redo

Mistakes happen, but Nano makes it easy to correct them with its undo and redo functionality. Press Alt + U to undo the last action and Alt + E to redo it. This feature ensures that you can quickly revert changes without losing your progress.

Syntax Highlighting

Nano offers syntax highlighting for those working with code, making reading and editing code easier. Syntax highlighting is available for various programming languages and can be enabled by adding the appropriate syntax files to your Nano configuration.
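As a sketch: on many distributions the stock syntax definitions live in /usr/share/nano, and an include line in your ~/.nanorc (or the system-wide /etc/nanorc) enables them. This line turns on Python highlighting; adjust the path for your system:

include "/usr/share/nano/python.nanorc"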

Screen picture by Don Watkins CC by SA 4.0

Custom Key Bindings

Nano enables you to customize key bindings to match your workflow. You can edit the /etc/nanorc file to modify default key bindings or add new ones. This flexibility allows you to personalize the editor based on your specific requirements, enhancing your editing experience and making it more efficient.
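For example, adding a line like this to /etc/nanorc (or your own ~/.nanorc) rebinds Ctrl + S to save the current file in the main editing window:

bind ^S savefile main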

Nano’s simplicity and powerful features make it a great choice for text editing in Linux. Whether editing configuration files, writing scripts, or taking notes, Nano has the tools to do the job efficiently.

The Freedom of Linux: A World Beyond Hardware Restrictions

In the ever-evolving world of technology, software updates often bring excitement and anticipation as they promise new features and improvements. However, with updates to proprietary operating systems, the excitement can be tempered by stringent hardware requirements that leave many users facing the inevitable need for a new computer. Fortunately, there is an alternative: Linux, the open source kernel that powers many distributions, allows users to embrace the latest software without hardware limitations.

A Diverse Landscape of Compatibility

Unlike proprietary operating systems with strict hardware prerequisites, Linux distributions offer a breath of fresh air. Whether you choose Pop!_OS, Fedora, or Linux Mint, Linux’s open nature ensures compatibility with a wide range of hardware, even aging systems. This remarkable flexibility is a testament to the power of open-source software.

Take, for instance, the case of the Darter Pro laptop from System76, acquired in early 2019 with Pop!_OS 18.10 pre-installed. Despite the years that have passed, this hardware continues to support the latest versions of not just Pop!_OS but also Ubuntu, Fedora, and Arch without breaking a sweat. Such an upgrade would be an unattainable dream if one attempted to install Windows 11 on the same machine. Likewise, the closed ecosystem of macOS locks users into a world where they can only experience the latest software if they invest in Apple’s proprietary hardware.

The Hidden Treasure of Open Source

Regrettably, many people remain oblivious to the hidden treasure trove that is open-source software. Beyond the Linux kernel that forms the foundation of countless distributions, a vast ecosystem of applications thrives, often outperforming their proprietary counterparts. This abundance of high-quality, open-source software is built on principles prioritizing user freedom and choice.

For instance, consider the MarkText application, the tool I used to craft this article. It’s an exemplary testament to the capabilities of open-source software. With abundant features, a user-friendly interface, and an active community of developers and users, MarkText competes toe-to-toe with proprietary alternatives without any of the vendor lock-in or hardware mandates that plague proprietary systems. This is the essence of open source: a realm in which the user is in control.

Breaking the Chains of Vendor Lock-In

Vendor lock-in is a pervasive challenge in the technology world. Proprietary software and hardware vendors often design their products to ensure consumers remain captive to their offerings. This strategy serves the interests of these companies, but it can be detrimental to the user, who may be trapped in a never-ending cycle of purchasing new hardware to stay current.

In contrast, Linux and open-source software operate under a different ethos. They empower users to take control of their technology. With the freedom to choose software and customize their experience, users are no longer chained to a specific vendor’s roadmap. This approach breaks the cycle of forced obsolescence and keeps hardware relevant for years, ultimately saving users money and reducing electronic waste.

A Sustainable Approach

In an era of increasing environmental consciousness, the longevity of hardware takes on added importance. The “throwaway culture” of rapid hardware turnover is financially wasteful and environmentally unsustainable. By embracing Linux and open-source software, users can extend the lifespan of their hardware, contributing to a more sustainable future.

Additionally, the open-source community fosters collaboration and innovation without the limitations of proprietary systems. Developers worldwide work together to create secure, stable, and feature-rich software, often outpacing the development cycles of their proprietary counterparts. This collaborative spirit ensures that Linux users can access cutting-edge technology without the need for frequent hardware upgrades.

Conclusion

In the world of technology, where operating system updates often come with stringent hardware requirements, Linux stands as a beacon of freedom and sustainability. Its compatibility with a wide range of hardware, commitment to open-source principles, and freedom from vendor lock-in make it a compelling choice for those who wish to break free from the shackles of constantly upgrading their hardware.

As we navigate an ever-changing technological landscape, let us remember that there is a world beyond hardware restrictions, a world where Linux and open-source software offer an oasis of choice and longevity. In this realm, the user is king, and technology serves their needs, not vice versa. So, next time you hear the siren call of a new operating system update, consider the boundless possibilities of Linux and liberate yourself from the cycle of forced obsolescence.

WoeUSB-ng to the rescue

Frequently, I’m approached by individuals seeking assistance in rescuing Windows computers that have encountered locking or damage issues. I occasionally utilize a Linux USB boot drive to access Windows partitions effectively. This enables me to transfer and safeguard files from these compromised systems securely.

Sometimes, clients misplace their passwords or lock themselves out of their login accounts. One viable method to restore account access involves generating a Windows boot disk to initiate repairs on the computer. Microsoft provides the option to obtain Windows copies via its official website, along with tools designed for crafting a USB boot device. However, utilizing these tools necessitates access to a Windows computer, posing a challenge for me as a Linux user. Consequently, I’ve sought alternative approaches for creating a bootable DVD or USB drive. My go-to tools, such as Etcher.io, Popsicle (for Pop!_OS), UNetbootin, and even the command line utility ‘dd’, have yielded limited success. Since my daily driver is Linux, it was nearly impossible to create a USB drive with a bootable Windows version.

A few years ago, I learned about WoeUSB and the subsequent project WoeUSB-ng. WoeUSB-ng is a software utility for creating bootable Windows USB drives: it takes a Windows ISO image and transfers it onto a USB drive, making it possible to install or repair Windows operating systems from that drive. The “ng” in its name stands for “next generation,” indicating that it’s a successor to the original WoeUSB tool. I have used it to create bootable Windows drives with both Windows 10 and Windows 11. WoeUSB-ng is open source with a GPL v3 license.

The project website lists several install options for Linux users.

Fedora users can use the following commands to install the software necessary to support WoeUSB-ng.

sudo dnf install git p7zip p7zip-plugins python3-pip python3-wxpython4

Ubuntu/Linux Mint users can use the following commands to install the software necessary to support WoeUSB-ng.

sudo apt install git p7zip-full python3-pip python3-wxgtk4.0 grub2-common grub-pc-bin parted dosfstools ntfs-3g

Then issue the following commands to install WoeUSB-ng on your system.

git clone https://github.com/WoeUSB/WoeUSB-ng.git
cd WoeUSB-ng
sudo pip3 install .
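WoeUSB-ng also installs a command-line tool alongside the GUI. A minimal sketch, assuming your Windows ISO is named Windows.iso in the current directory and your USB stick is /dev/sdX (double-check the device with lsblk first):

sudo woeusb --device Windows.iso /dev/sdX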

Once the software is installed, creating a bootable Windows drive is very straightforward: launch WoeUSB-ng, then select your Windows ISO and the target USB drive.

Click install, and depending on the processor and RAM in your machine, you should have a bootable Windows 10 or Windows 11 drive in very little time. This article is adapted from Use this bootable USB drive on Linux.