How to easily get a list of your local Ollama models within your application

February 04, 2025


When you’re deep in the code jungle working with local language models via Ollama, you sometimes need a quick way to see what models you have hanging out on your machine. Whether it’s for giving users a nifty dropdown list, validating input, or just satisfying your inner geek while authoring a fresh custom alias for your shell, having a command that spits out a clean list has proven helpful for me multiple times.

In this post, we kick things off with a handy bash one-liner that filters out those pesky headers. Then, we explore variations in Python (using both the trusty subprocess module and the official Ollama Python client), Node.js, Go, and Rust. Plus, we’ll share some real-world examples from projects like coco and augments to show you how the pros do it.

Note: This assumes you already have Ollama installed on your local machine.
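
Not sure whether the CLI is actually installed and on your PATH? A quick sanity check before running any of the snippets below:

ollama --version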

Bash One-Liner

This one-liner is your quick and dirty tool to list all your locally downloaded Ollama models:

ollama list | awk '{print $1}' | awk '{if(NR>1)print}'

How It Works

  1. ollama list
    Lists all locally downloaded models with a header line (because even computers love a good title).
  2. awk '{print $1}'
    Extracts just the first column, which is usually the model names.
  3. awk '{if(NR>1)print}'
    Skips that first header line—no need for extra fluff. (As you’ll see just below, steps 2 and 3 can also be folded into a single awk pass.)
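
By the way, the two awk passes can be collapsed into one that skips the header and grabs the first column in a single go, which is handy if you want to wrap the whole thing in a shell alias:

ollama list | awk 'NR>1 {print $1}'

# Optional: stick it in your .bashrc/.zshrc as an alias (name it whatever you like)
alias ollama-models="ollama list | awk 'NR>1 {print \$1}'"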

Python

Using the Subprocess Module

If Python is more your vibe, you can use the built-in subprocess module to run that bash command and parse its output:

import subprocess

# Execute the bash one-liner command
result = subprocess.run(
    "ollama list | awk '{print $1}' | awk '{if(NR>1)print}'",
    shell=True,
    capture_output=True,
    text=True
)

# Split the output into non-empty lines
model_names = [line for line in result.stdout.splitlines() if line.strip()]

if not model_names:
    print("No Ollama models found. Time to install some via the Ollama CLI!")
    exit(1)

# Create a list of model objects for further processing
available_models = [{"name": model, "value": model} for model in model_names]
print("Available Models:", available_models)

This snippet runs our bash magic, cleans up the output, and gives you a nice list to work with. It’s like having your very own digital record collection, but for AI models.

Using the Official Ollama Python Client

For those riding the official Ollama Python client wave, you can often use built-in methods to list your models without shelling out to the command line. (Heads up: method names might change, so check the docs if things get funky.)

import ollama  # Make sure to install the official client via pip (pip install ollama)

# Initialize the Ollama client (the default host is http://localhost:11434)
client = ollama.Client(host="http://localhost:11434")

try:
    # list() returns the locally available models -- the same data as `ollama list`
    response = client.list()
except Exception as e:
    print("Error retrieving models:", e)
    exit(1)

# Recent client versions return an object with a .models list, where each entry's
# name lives in .model; older versions return a plain dict with a "name" key instead.
model_names = [m.model for m in response.models]

if not model_names:
    print("No Ollama models found. Better pull one from the CLI, man!")
    exit(1)

available_models = [{"name": name, "value": name} for name in model_names]
print("Available Models:", available_models)

This approach is a bit more “Zen” since it avoids shell commands and uses the client’s native abilities to do the heavy lifting.

Node.js

For the JavaScript enthusiasts out there, here’s how you can do it with Node.js using the child_process module:

const { exec } = require('child_process');

exec("ollama list | awk '{print $1}' | awk '{if(NR>1)print}'", (error, stdout, stderr) => {
  if (error) {
    console.error(`Error executing command: ${error.message}`);
    process.exit(1);
  }
  const modelNames = stdout.split('\n').filter(line => line.trim());
  if (modelNames.length === 0) {
    console.log('No Ollama models found. Time to install some via the Ollama CLI!');
    process.exit(1);
  }
  const availableModels = modelNames.map(model => ({ name: model, value: model }));
  console.log("Available Models:", availableModels);
});

This Node.js snippet is straightforward, running the bash one-liner and processing the output into a neat array of models.

Go

For those who prefer Go (or just want to see how another language handles it), here’s an example using Go’s os/exec package:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Run the command
	cmd := exec.Command("bash", "-c", "ollama list | awk '{print $1}' | awk '{if(NR>1)print}'")
	output, err := cmd.Output()
	if err != nil {
		log.Fatalf("Command execution failed: %v", err)
	}

	// Process output: split by lines and filter empty strings
	lines := strings.Split(string(output), "\n")
	var modelNames []string
	for _, line := range lines {
		if strings.TrimSpace(line) != "" {
			modelNames = append(modelNames, strings.TrimSpace(line))
		}
	}

	if len(modelNames) == 0 {
		fmt.Println("No Ollama models found. Please install one via the Ollama CLI.")
		return
	}

	// Print the available models
	fmt.Println("Available Models:")
	for _, model := range modelNames {
		fmt.Printf("- %s\n", model)
	}
}

This Go program executes the bash command, splits the output, filters out any empty lines, and then prints a tidy list of your models.

Rust

This snippet uses Rust’s standard library to spawn a shell command and print each model:

use std::process::Command;

fn main() {
    // Execute the bash command that lists models, extracts the first column, and skips the header.
    let output = Command::new("bash")
        .arg("-c")
        .arg("ollama list | awk '{print $1}' | awk '{if(NR>1)print}'")
        .output()
        .expect("Failed to execute command");

    // Check if the command was successful
    if !output.status.success() {
        eprintln!("Error: {}", String::from_utf8_lossy(&output.stderr));
        std::process::exit(1);
    }

    // Convert the output bytes to a String and split into lines
    let stdout = String::from_utf8_lossy(&output.stdout);
    let models: Vec<&str> = stdout
        .lines()
        .map(|line| line.trim())
        .filter(|line| !line.is_empty())
        .collect();

    // Handle the case when no models are found
    if models.is_empty() {
        println!("No Ollama models found. Please install one via the Ollama CLI.");
    } else {
        println!("Available Models:");
        for model in models {
            println!("- {}", model);
        }
    }
}

Real-World Usage and Integration

Practical projects like coco and augments offer excellent examples of how these techniques are applied in real-world scenarios. In the coco project, the bash one-liner is used during the initialization process to retrieve a clean list of available models. This enables the system to present users with an up-to-date model selection—vital for features such as generating commit messages, creating changelogs, and summarizing code changes. By integrating this command into the setup routine, coco ensures that developers have a seamless experience when configuring their AI-powered Git assistant.

Similarly, the augments project incorporates a Python module (specifically in the llm.py file) that leverages the same underlying idea. Here, the script checks for locally available Ollama models and, if a required model isn’t present, provides clear feedback along with instructions on how to pull the model using the Ollama CLI. This not only improves the overall robustness of the application but also enhances user experience by reducing runtime errors related to missing dependencies.
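
If you want the same kind of guard in your own tooling, the idea boils down to just a few lines. Here’s a rough bash sketch of the pattern (the model name is only a placeholder, not something either project mandates):

#!/usr/bin/env bash
# Hypothetical example: make sure a required model is present before the app starts.
required_model="llama3.2"  # placeholder -- swap in whatever your app depends on

# Prefix match, so "llama3.2" also matches "llama3.2:latest"
if ! ollama list | awk 'NR>1 {print $1}' | grep -q "^$required_model"; then
    echo "Model '$required_model' is not available locally."
    echo "Pull it first with: ollama pull $required_model"
    exit 1
fi

echo "Model '$required_model' is ready to go."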


So here’s the general take-away…

Whether you prefer a quick bash command or a more integrated approach in Python, Node.js, Go, or Rust, these examples cover the main ways to list your local Ollama models. Adapting these snippets to your development environment will streamline model management and ensure your application always knows exactly which models are available on the machine.

Projects like coco and augments are real-world testaments to the power of these techniques, proving how helpful simple one-liners can be towards improving the developer experience 🧙‍♂️✨
