Introducing Jan: A Privacy-Focused, Locally Hosted Open Source GPT for AI Enthusiasts

Are you looking for a locally hosted, open source GPT that protects your privacy while letting you leverage the latest AI models? Then you need to download Jan. Jan is easy to install on Linux, macOS, and Windows, and it has excellent documentation to guide the process.

Jan is a ChatGPT alternative that runs 100% offline on your desktop. The goal is to make it easy for anyone, with or without coding skills, to download and use AI models with full control and privacy. Jan is truly open source, released under the Apache 2.0 license. It stores all data locally, so an internet connection is optional; it can function entirely offline. Users have the flexibility to choose AI models, both local and cloud-based, without worrying about their data being sold. Jan is powered by Llama.cpp, a local AI engine that offers an OpenAI-compatible API. This lets you use AI capabilities in various applications on your laptop or PC.
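Because the API is OpenAI-compatible, any standard chat-completions client can talk to it. Here is a minimal sketch using only the Python standard library; the port and model name are assumptions to check against your own Jan local API server settings.

```python
import json
import urllib.request

# Assumed defaults -- verify the port and model name in Jan's settings.
BASE_URL = "http://localhost:1337/v1"

def build_chat_request(prompt, model="jan-nano-128k"):
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask_jan(prompt, model="jan-nano-128k"):
    """POST the payload to Jan's chat completions endpoint and return the reply."""
    data = json.dumps(build_chat_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read().decode("utf-8"))
    return body["choices"][0]["message"]["content"]
```

Because the request and response shapes match OpenAI's, existing OpenAI client libraries can also be pointed at the same base URL.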

I downloaded and installed Jan on Linux Mint 22.1 using the deb file. If you are using a non-Debian-based Linux distribution such as Fedora, you can install the AppImage file. I also downloaded and installed Jan on my M3 MacBook Air. The project has excellent documentation that makes it easy to get started with Jan. Be sure to consult it.

As suggested in the documentation, I downloaded and installed the Jan-nano-128K model. The project provides excellent resources that helped me learn how to use the LLM. I decided to see if it could give me the code for a simple web app that converts Fahrenheit to Celsius using Python and Flask. The model took about fifteen seconds and then produced the code, which I copied into VSCodium and saved as a Python file. The model also told me to install Flask with the command pip3 install Flask. The result was two files, which I saved in a Python virtual environment on my computer.

The Python file:

from flask import Flask, request, render_template

app = Flask(__name__)

@app.route('/', methods=['GET', 'POST'])
def convert_temp():
    result = None
    if request.method == 'POST':
        fahrenheit = float(request.form['fahrenheit'])
        celsius = (fahrenheit - 32) * 5/9
        result = celsius
    return render_template('convert.html', result=result)

if __name__ == '__main__':
    app.run(debug=True)
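The conversion at the heart of convert_temp() is easy to sanity-check on its own, without running the server:

```python
# Quick sanity check of the conversion formula used in convert_temp(),
# independent of Flask.
def fahrenheit_to_celsius(fahrenheit: float) -> float:
    """Convert Fahrenheit to Celsius: C = (F - 32) * 5/9."""
    return (fahrenheit - 32) * 5 / 9

print(fahrenheit_to_celsius(212))  # water boils at 100.0 C
```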

The second file was the HTML template referenced by the render_template call in the code Jan produced. Flask looks for templates in a templates/ folder, so I saved this file as templates/convert.html.

<!DOCTYPE html>
<html>
<head>
    <title>Fahrenheit to Celsius Converter</title>
</head>
<body>
    <h1>Fahrenheit to Celsius Converter</h1>
    <form method="post">
        <label for="fahrenheit">Enter Fahrenheit:</label>
        <input type="text" id="fahrenheit" name="fahrenheit" required>
        <button type="submit">Convert</button>
    </form>
    {% if result is not none %}
    <h2>Result: {{ result }}°C</h2>
    {% endif %}
</body>
</html>

I copied both code snippets into VSCodium and saved them to the Python virtual environment, which I created using the following command:

python3 -m venv temperature

I opened a terminal, activated the virtual environment, and ran python3 temperature.py. Then I pointed my browser to http://127.0.0.1:5000 as directed and was presented with the simple web app I had asked the model to create.

Screen picture by Don Watkins CC by SA 4.0

Reading the project's extensive documentation is crucial, and the community also maintains a number of useful resources, including GitHub, Discord, X, and LinkedIn. The project's blog has a number of useful resources too.

AI Voice Generation Made Easy with Pinokio and OpenAudio

Are you a scientist, a developer, or just a tinkerer like me? Are you fascinated by the power of AI to generate and clone human voices for your own work? OpenAudio might be what you are looking for. Leveraging the power of Pinokio, it's easy to download and install OpenAudio on your computer. In this brief introduction I am using an M3 MacBook Air with 16 GB of RAM. Pinokio is a browser that enables you to install, run, and automate any AI application on your computer; follow these instructions to install Pinokio and discover how easy AI-generated speech can be.

Now that Pinokio is installed, I just click the 'Discover' button at the top right of the application browser and look for OpenAudio, the first application listed in the Apps section. Pinokio is open source with an MIT license, and OpenAudio is open source with an Apache 2.0 license. OpenAudio is based on Fish Speech, which has recently rebranded itself as OpenAudio.

Screen picture by Don Watkins CC by SA 4.0

The project has seventy-seven contributors and states on their website that: “We are incredibly excited to unveil OpenAudio S1, a cutting-edge text-to-speech (TTS) model that redefines the boundaries of voice generation. Trained on an extensive dataset of over 2 million hours of audio, OpenAudio S1 delivers unparalleled naturalness, expressiveness, and instruction-following capabilities.”

This model was easy to install through Pinokio, and you can quickly start producing your own AI-generated speech with it. Your experience may vary depending on your processor and RAM.

Screen picture by Don Watkins CC by SA 4.0

Once installed you will be presented with this easy to use interface.

Screen picture by Don Watkins CC by SA 4.0

These four lines of text took 77 seconds to generate 8 seconds of audio in WAV format, resulting in a 684 KB file. There is a download button at the top right of the playback window.

Listen to the audio and judge for yourself.

In addition to text-to-speech synthesis, OpenAudio supports voice cloning. You can record your own voice or upload a sample; five to ten seconds of reference audio works well for generating the cloned voice. A dialog box at the lower left of the display is where this is accomplished, along with other controls that override the default settings.

Use of this model is governed by the Creative Commons CC BY-NC-SA 4.0 license. The project also includes a caveat:

“We do not hold any responsibility for any illegal usage of the codebase. Please refer to your local laws about DMCA and other related laws.”

The model is a text-to-speech model based on VQ-GAN and Llama, developed by Fish Audio. There are links to the source code and models. The project maintains a Discord channel and a presence on X. Visit the OpenAudio blog for up-to-date information and research.

Have some fun and install Pinokio and OpenAudio on your computer today. Leverage the power of open source and AI in your projects and join their community of developers if you are inclined.

Simplifying local AI with Pinokio

Are you looking for a way to leverage AI without being a developer or an experienced coder? Then Pinokio is just what the doctor ordered. Best of all, you can run Pinokio on your own computer, so you don't have to sacrifice your privacy. Pinokio stands out as a revolutionary tool that merges the power of open-source automation with the simplicity of a browser interface. Built with developers and curious tinkerers in mind, Pinokio is redefining what it means to use a browser: not just to explore the internet, but as a platform that allows even inexperienced users to download and launch AI applications that would ordinarily require a lot of know-how and skill.

Using Pinokio you can easily install, run, and automate AI tools on your computer. Anything you can execute on the command line can be streamlined with Pinokio scripts, all through an intuitive, user-friendly interface. You can use Pinokio to install AI apps, manage and run those apps, and create workflows for the installed apps. Plenty of help is available to get you started: follow @cocktailpeanut on X or join the Pinokio Discord to ask questions. Pinokio is open source with an MIT license.

The project has detailed directions for installation on each operating system and supports Windows, macOS, and Linux. I chose both the Linux and macOS installs. Pinokio is supported on both legacy Intel Macs and Apple Silicon Macs. If you are a Linux user like me, follow this link to find the deb or rpm package for easy installation, or the source code to compile the application yourself.

I also installed Pinokio on my M3 MacBook Air. The Apple Silicon install is a little trickier, but if you follow the excellent documentation you will be up and running. Once the application is installed and launched on either platform, the Pinokio environment is automatically and seamlessly set up during the first launch. When that was accomplished, I was eager to dive in and discover what AI applications I could use. At the top of the Pinokio browser there is a 'Discover' button that takes the user to a number of applications that can be set up and launched. Pressing it presents the following display of News and Apps that can be loaded.

Screen picture by Don Watkins CC by SA 4.0

There are dozens of AI apps with certified scripts that can be installed, almost too many to choose from. I knew from experience that although my Linux computer has an i7 with 64 GB of RAM, it lacks a GPU, so running AI apps on that platform would be slow. I elected to use the M3 MacBook with 16 GB of RAM for much faster processing. My first choice was FaceFusion, which has an OpenRAIL-S license.

Screen picture by Don Watkins CC by SA 4.0

FaceFusion is a powerful tool for face swapping and enhancement. I decided to install it with Pinokio on my Apple Silicon Mac. It is easy to install this AI app by clicking the 'One-Click install' button and waiting a short time for the installation to finish. Once installed in Pinokio, the app can easily be launched from the browser.

Screen picture by Don Watkins CC by SA 4.0

Once FaceFusion is launched, I am presented with a menu interface to choose how I will run the application.

I chose 'Run Default' and was presented with an elegant yet easily managed interface for enhancing facial images. In the browser I can see that FaceFusion is running on port 7860 on localhost.

Screen picture by Don Watkins CC by SA 4.0

Pointing my browser to localhost:7860 I can see the FaceFusion app running.

Screen picture by Don Watkins CC by SA 4.0

Now I can have some fun with faces. I used another AI program to generate an image of a handsome guy with the blond hair and blue eyes I had earlier in life. That is the source image I inserted into the FaceFusion app. Then I added a recent picture of myself, taken earlier this year, as the target.

Screen picture by Don Watkins CC by SA 4.0
Don Watkins wearing a scarf
Photo by Don Watkins CC by SA 4.0

Five seconds after I clicked the 'Start' button at the bottom of the app, I had the new me. Maybe someone will develop HairFusion too. Have some fun with your images and explore FaceFusion more thoroughly.

Image created by FaceFusion

Transforming Family Photos into Festive Holiday Cards with AI

There are several open-source tools available this year for creating holiday cards. If you have a wonderful photo of your family or grandchildren, but it was taken during a different season and you’d like to change the background, there’s a Python module called rembg that can help you with that. To get started, you’ll need to set up a Python virtual environment and install the necessary dependencies. I created a directory named rembg using the following command:

$ mkdir rembg 

Then I set up a Python virtual environment:

python3 -m venv /home/don/rembg

Then I activate the environment with the following command.

source /home/don/rembg/bin/activate

Then I install rembg:

pip3 install rembg

I want to use rembg from the command line so I make the following additional installation:

pip3 install "rembg[cli]"

Then I install onnxruntime, the machine learning inference engine that rembg uses to run its models:

pip3 install onnxruntime

Now I am ready to remove the background from the image I have chosen. This is a recent picture of my wife and me, taken in the fall. I like the picture, but I want it to have a festive background.

Photo by https://www.sissyhorch.com/

I make sure that my image is in the rembg folder and then execute the following command. The i subcommand tells rembg to operate on a single file.

rembg i grandparents.jpg grandparents_no_bg.jpg

In the command above I gave the output a different file name so that I would still have my original in case I wanted to use it again. You can see below that the background has been removed from the image above.
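If you have several photos to process, the same CLI can be driven from a short Python script. This is a sketch; the file names are illustrative, and note that a PNG output preserves the transparent background that JPEG cannot store.

```python
import shutil
import subprocess

def rembg_command(src: str, dst: str) -> list[str]:
    """Build the `rembg i` invocation for a single input/output pair."""
    return ["rembg", "i", src, dst]

def strip_backgrounds(pairs):
    """Run rembg over a list of (input, output) file pairs."""
    if shutil.which("rembg") is None:
        raise RuntimeError("rembg CLI not found; run: pip3 install 'rembg[cli]'")
    for src, dst in pairs:
        subprocess.run(rembg_command(src, dst), check=True)
```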

I can create a nice background for my card with Inkscape, add some festive lettering, and use a poinsettia I downloaded from Openclipart.org. The completed card is shown below.

Created with InkScape, OpenClipart.org by Don Watkins CC by SA 4.0

Rembg and onnxruntime are both open source with MIT licenses.

Navigating the AI Revolution: Balancing Innovation, Privacy, and Open-Source Alternatives

Everywhere you look, whether in print or on the web, AI is all the rage. I'm part of the group that sees potential in machine learning and how it might reshape our educational systems. All the major tech companies have embraced it, while many folks are sure it spells the end of authentic authorship. Beyond the slop created with artificial intelligence, there is growing concern for our privacy, and some people allege that their original works are being used to train large language models without permission.

In the past couple of years, I have asked folks in higher education and K-12 whether their institutions have policies stipulating how teachers and students can use this emerging technology. With few exceptions, such policies do not exist. There are Luddites who refuse to acknowledge its presence, some who believe it is already ubiquitous but have very few policies, and others who stipulate no policy at all.

Most major operating systems and many of their applications now incorporate AI features, making it challenging to avoid them. However, there is a solution: high-quality, freely accessible software. This solution consists of open-source software that does not include artificial intelligence algorithms. The best part is that you don’t have to give up your existing operating systems unless you choose to. If you decide to switch, I recommend considering one of the major Linux distributions, as they can help extend the life of your hardware and software.

LibreOffice is a comprehensive office suite that includes a word processor (Writer), a spreadsheet application (Calc), and presentation software (Impress). It saves your work in open document formats, ensuring your documents always remain accessible to you; proprietary programs often save your work in formats you cannot open unless you keep paying for a license. LibreOffice is available on Linux, macOS, and Windows and is open source.

GIMP (GNU Image Manipulation Program) is a fully featured alternative to proprietary photo editing software, without embedded AI capabilities. GIMP is used for image manipulation, editing, free-form drawing, converting between various image file formats, and other specialized tasks. The software is extensible through plugins and supports scripting for enhanced functionality. It is open source with a GPL v3 license.

Inkscape is a free, open-source vector graphics editor available for Unix-compatible systems, including Linux, Windows, and macOS. It provides a robust set of tools and is widely used for creating artistic and technical illustrations, such as cartoons, clip art, logos, typography, diagrams, and flowcharts. Inkscape uses vector graphics to ensure sharp printouts and renderings at any resolution, unlike raster graphics, which are limited by pixel dimensions.

Blender is a robust, open-source software suite for 3D modeling and animation, extensively utilized across diverse industries such as animation, visual effects, art, and 3D printing. It provides a comprehensive array of modeling, texturing, sculpting, rigging, animation, rendering, compositing, and motion-tracking tools within a single, versatile application.

Audacity is a free and open-source digital audio editor for Windows, macOS, and Linux. In addition to recording audio from various sources, Audacity provides extensive post-processing features for all types of audio. These features include effects such as normalization, trimming, and fading in and out. Audacity can record multiple tracks at once. Audacity natively supports importing and exporting WAV, AIFF, MP3, Ogg Vorbis, and other formats compatible with the libsndfile library. However, due to patent licensing restrictions, the FFmpeg library required for handling proprietary formats like M4A (AAC) and WMA is not included with Audacity and must be downloaded separately.

VLC is a free, open-source, and portable media player and streaming server created by the VideoLAN project. It supports desktop operating systems and mobile platforms, including Android, iOS, and iPadOS.

Enhancing My Resume with AI: A Journey with Microsoft Copilot

A few days ago a friend sent me a message about an opportunity to work as a Maker Space coach at a local university. After discussing the opportunity with my wife, she suggested I apply. I completed the online application, and toward the end of the process I needed to submit a resume. There used to be a way to generate a resume from your LinkedIn profile, but LinkedIn no longer offers that service. I am a Canva subscriber, and Canva has an application that is supposed to do this, but alas, it wasn't working tonight. I asked ChatGPT to create a resume from the link to my LinkedIn profile; ChatGPT won't perform this task. That's when I tried Microsoft Copilot.

I asked Copilot if it could help me create a resume. It suggested that I drag and drop my existing resume into the conversation space. I searched my drive and found a resume I had written seven years ago for a graduate school application. It was a PDF, and Copilot would not work with a PDF but suggested a JPG or PNG instead. I opened the resume document, took a screen picture of it, and saved the file as a PNG. Then I uploaded it to Copilot. In just a few seconds Copilot read my resume and printed it out on the display. I instructed Copilot to add my new additions and corrections to the resume, and it did a wonderful job. It was incredibly easy.

Copilot provides all of its output in Markdown. I decided to convert the Markdown to PDF, so I copied and pasted the output into MarkText, my favorite Markdown editor, saved the file, and exported it to a PDF. I uploaded the PDF with my application and submitted it. If you find yourself in a situation like mine, I suggest trying Microsoft Copilot along with open source tools like the Screenshot utility on Linux Mint and MarkText.

Educators to Follow on Mastodon for Innovative Teaching Insights

I have been using Mastodon for almost six years, and I continue to be amazed at the quality of discourse and the diverse community of educators and folks interested in education in the Fediverse. If you are accustomed to algorithm-driven, centralized social networks, Mastodon is going to seem a bit unusual at first. If you are a WordPress user, you can connect your blog to Mastodon with the ActivityPub plugin, but you don't need a blog to join. You just need to create an account on any one of the dozens of Mastodon instances that exist around the world. Once you are connected to an instance, you can find and follow other users whether they are on your particular server instance or not.

Mastodon communication is driven by hashtags, which many of you are already familiar with. Some of my favorites include #edtech.

A toot on Mastodon is typically limited to five hundred characters. As on other microblogging platforms you may have used, brevity is prized, but there is more than enough space to get your information across, and hashtags let your audience know what your toot is about.

Here's a list of educators and education-focused accounts currently using Mastodon.

Eric Sheninger – @esheninger@mastodon.social
Sandy Kendell – @SandyKendell@mastodon.education
Wesley Fryer – @wfryer@mastodon.cloud
Martin Dougiamas – @martin@openedtech.social
Alice Barr – @alicebarr@techhub.social
Miguel Guhlin – @mguhlin@mastodon.education
EdTech Group – @edtech@chirp.social
Clint LaLonde – @clintlalonde@mastodon.oeru.org
Doug Holton – @dougholton@mastodon.social
Anna Millis – @amills@mastodon.oeru.org
Open at Virginia Tech – @openatvt@fosstodon.org
SPARC – @sparc@mastodon.social
Project Gutenberg – @gutenberg_org@mastodon.social
Smithsonian Magazine – @Smithsonianmag@flipboard.com
Steven Beschloss – @StevenBeschloss@mastodon.social
Bill Fitzgerald – @funnymonkey@freeradical.zone
WikiEducation – @WikiEducation@wikis.world
CreativeCommons – @creativecommons@mastodon.social
Edutopia – @edutopia@mastodon.education
Cognitively Accessible Math – @geonz@mathstodon.xyz
NPR – @npr@mastodon.social
Open Source Science – @os-sci@mastodon.social
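Handles like these follow a regular @user@instance format, and by Mastodon's standard URL convention each one maps to a profile page at https://instance/@user. A small sketch that makes any handle in the list clickable:

```python
# A Mastodon handle like @user@instance maps to the profile URL
# https://instance/@user under Mastodon's standard URL layout.
def profile_url(handle: str) -> str:
    """Convert '@user@instance' (leading '@' optional) to a profile URL."""
    user, instance = handle.lstrip("@").split("@", 1)
    return f"https://{instance}/@{user}"
```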

In conclusion, Mastodon offers a refreshing alternative to traditional, algorithm-driven social networks. Its decentralized nature and vibrant community provide an enriching environment for educators and those passionate about education. Whether you’re sharing your thoughts, discovering new ideas through hashtags, or connecting your WordPress blog with the ActivityPub plugin, Mastodon opens up a world of possibilities. Embrace the change and dive into meaningful conversations on this unique platform. Happy tooting!

Open WebUI: A nuanced approach to locally hosted Ollama

Open WebUI offers a robust, feature-packed, and intuitive self-hosted interface that operates seamlessly offline. It supports a variety of LLM backends, including Ollama and OpenAI-compatible APIs. Open WebUI is open source with an MIT license, is easy to download and install, and has excellent documentation. I chose to install it on both my Linux computer and my M2 MacBook Air. The software is written in Svelte, Python, and TypeScript and has a community of over two hundred thirty developers working on it.

The documentation states that one of its key features is effortless setup, and it was indeed easy to install. I chose to use Docker. It boasts a number of other great features, including OpenAI API integration, full Markdown and LaTeX support, and a model builder for easily creating Ollama models within the application. Be sure to check the documentation for all the nuances of this amazing software.

I decided to install Open WebUI with bundled Ollama support for CPU only, since my Linux computer does not have a GPU. This container image unites Open WebUI and Ollama for an effortless setup. I used the following Docker command, copied from the GitHub repository:

$ docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama

On the MacBook I chose a slightly different install, opting to use my existing Ollama install because I wanted to conserve space on the smaller host drive. I used the following command, taken from the GitHub repository:

% docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
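Since this variant points Open WebUI at an Ollama server already running on the host, it helps to confirm that server is reachable before troubleshooting the container. A tiny check, assuming Ollama's default port of 11434:

```python
import urllib.request

def ollama_is_up(host: str = "http://localhost:11434", timeout: float = 3.0) -> bool:
    """Return True if an HTTP GET to the Ollama root URL succeeds."""
    try:
        with urllib.request.urlopen(host, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, DNS failure, timeout, etc.
        return False
```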

Once the install was complete I pointed my browser to:

http://localhost:3000/auth

I was presented with a login page and asked to supply my email and a password.

Screen picture by Don Watkins CC by SA 4.0

The first time you log in, use the 'Sign up' link and provide your name, email, and a password. Subsequent logins require only email and password. After logging in the first time, you are shown a "What's New" summary about the project and software.

After pressing “Okay, Let’s Go!” I am presented with this display.

Screen picture by Don Watkins CC by SA 4.0

Now I am ready to start using Open WebUI.

The first thing you want to do is ‘Select a Model’ at the top left of the display. You can search for models that are available from the Ollama project.

Screen picture by Don Watkins CC by SA 4.0

On an initial install you will need to download a model from Ollama.com. I entered the model name I wanted in the search window and pressed 'Enter,' and the software downloaded the model to my computer.

Screen picture by Don Watkins CC by SA 4.0

Now that the model is downloaded and verified, I am ready to begin using Open WebUI with my locally hosted Phi3.5 model. Other models can be downloaded and easily installed as well. Be sure to consult the excellent getting started guide, and have fun using this feature-rich interface. The project also has tutorials to assist new users.

Exploring Hollama: A Minimalist Web Interface for Ollama

I've continued my large language model learning experience with an introduction to Hollama. Until now my experience with locally hosted Ollama had been querying models with snippets of Python code, using it in REPL mode, and customizing it with text model files. Last week that changed when I listened to a talk about Hollama.
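For context, the kind of direct-query snippet I had been using looks something like the sketch below; the model name and Ollama's default port (11434) are assumptions to adjust for your own setup.

```python
import json
import urllib.request

def build_generate_request(prompt: str, model: str = "llama3.2") -> dict:
    """Payload for Ollama's /api/generate endpoint, non-streaming."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "llama3.2",
             host: str = "http://localhost:11434") -> str:
    """Send one prompt to a local Ollama server and return its full response."""
    data = json.dumps(build_generate_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(f"{host}/api/generate", data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))["response"]
```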

Hollama is a minimal web user interface for talking to Ollama servers. Like Ollama itself, Hollama is open source with an MIT license. It was developed initially by Fernando Maclen, a Miami-based designer and software developer, and currently has nine contributors. It is written in TypeScript and Svelte, and the project has documentation on how you can contribute too.

Hollama features large prompt fields, Markdown rendering with syntax highlighting, code editor features, customizable system prompts, and a multi-language interface, along with light and dark themes. You can check out the live demo, download releases for your operating system, or self-host with Docker. I decided to download it on the M2 MacBook Air and my Linux computer.

On Linux, you download the tar.gz file and extract it. This produced a directory bearing the name of the compressed file, "Hollama 0.17.4-linux-x64". I renamed the directory Hollama for ease of use, changed into it, and executed the program:

$ ./hollama

The program launches quickly, and I was presented with a reasonably intuitive user interface.

Screen picture by Don Watkins CC by SA 4.0

At the bottom of the main menu, and not visible in this picture, is the toggle for light and dark mode. On the left of the main menu there are four choices. First is 'Session', where you enter your query for the model. The second selection is 'Knowledge', where you can develop your model file. The third selection is 'Settings', where you select the model(s) you will use; there is a checkbox for automatic updates and a link to browse all the current Ollama models. The final menu selection is 'Motd', the message of the day, where project updates and other news are posted.

Model creation and customization are much easier with Hollama. I complete this model creation in the 'Knowledge' tab of the menu. Here I have created a simple 'Coding' model that acts as a Python expert.

Screen picture by Don Watkins CC by SA 4.0

In 'Settings' I specify which model I am going to use. I can download additional models and/or select from the models already installed on my computer. Here I have set the model to 'gemma2:latest'. I have also set the software to check for updates, and I can choose which language the interface will use: English, Spanish, Turkish, or Japanese.

Screen picture by Don Watkins CC by SA 4.0

Now that I have selected the 'Knowledge' entry and the model I will use, I am ready to use the 'Session' section of the menu and create a new session. I selected 'New Session' at the top, and all my other parameters were set correctly.

Screen picture by Don Watkins CC by SA 4.0

At the bottom right of the ‘Session’ menu is a box for me to enter the prompt I am going to use.

Screen picture by Don Watkins CC by SA 4.0

You can see the output below.

Screen picture by Don Watkins CC by SA 4.0

The output is separated into a code block and Markdown, so it is easy to copy the code into a code editor and the Markdown into a text editor. Hollama has made working with Ollama much easier for me, once again demonstrating the versatility and power of open source.

Taking a look at financial data with Ollama

Several weeks ago a person asked me to assist her with organizing her financial records to take to a tax professional. She does not use a financial program like GnuCash, which could have made the project much easier. Instead, we downloaded a CSV file from her bank, and she used Microsoft Excel to add a category to each expense, a tedious process. I then used a pivot table to further organize and display her data, which she took to the tax preparer.

Recently, while working on other projects with Ollama, I wondered if it might be possible to use a local large language model to accomplish the same task. It is easy to download Ollama; if you are a Linux user like I am, you can enter the following command in a terminal:

curl -fsSL https://ollama.com/install.sh | sh

I experimented with Phi3.5 and Llama3.2 and found the latter worked better for me. It is easy to pull the model down to your computer with the following command:

$ ollama pull llama3.2

Once the model was downloaded, I wanted to make a custom model to analyze my financial data, a CSV file from the bank. Using nano, I created a model file that I called financial. Here is its text:

FROM llama3.2

# set the temperature [higher is more creative, lower is more coherent]
PARAMETER temperature 0.6

# set the system message
SYSTEM """
You are a financial analyst. Analyze the financial information I supply.
"""

I used the model file to create the custom model for this financial analysis. I set the temperature parameter to 0.6 to keep the output more coherent and accurate. I entered the following command in the terminal:

$ ollama create financial -f financial

This created a unique LLM based on Llama3.2 to perform the financial analysis. I made sure the CSV file from my financial institution was in my current working directory (this is important), and then entered the following command to pass the CSV file to the custom LLM:

ollama run financial:latest "$(cat data.csv) Summarize my transactions."

This gave me a complete summary of the debits and credits in the small CSV file. I have encountered some errors, and I plan to keep working with the model and reading. I'm encouraged by the results.
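Because LLMs can make arithmetic mistakes, it is worth cross-checking the model's totals deterministically. A small sketch, assuming the bank's CSV has an Amount column (adjust the column name to match your own file):

```python
import csv
from collections import defaultdict

def summarize(path: str) -> dict:
    """Total positive amounts as credits and negative amounts as debits.

    The "Amount" column name is an assumption; change it to match the
    header your bank uses in its CSV export.
    """
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            amount = float(row["Amount"])
            totals["credits" if amount > 0 else "debits"] += amount
    return dict(totals)
```

Comparing these totals against the model's summary is a quick way to spot the kinds of errors I encountered.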