Augments

Published on February 08, 2025


AI-Powered CLI Functions for Information Processing

Revolutionize the way you handle information with an AI-driven command-line toolbelt. Whether you’re investigating YouTube videos, scrutinizing articles, or managing clipboard content, Augments is designed to streamline your daily digital tasks so you reach your objectives faster. How can we draw insights more efficiently and deliver a tool that sustains your flow state in problem-solving, without decision-making or data-processing fatigue?

  • Condense those 4 YouTube videos you found on the new Next App Router into key takeaways in under ten minutes
  • Empower yourself when building out a personal resource library like https://obsidian.md/ 🧙‍♂️🪄

Read more about how it was built in the companion blog post: Building Augments: Cybernetic Enhancements for the Information Age.

Latest Updates

🤖 January 2024: Local AI with Ollama

Bringing AI capabilities offline! Now you can run powerful language models locally using Ollama integration. Choose the right model for each task – from quick summaries to deep code analysis.

Python
from augments.lib.llm import OllamaClient, ModelType

# Pick the best model for your task
client = OllamaClient(model=ModelType.CODE.value)  # Optimized for code
client.generate("Explain this function...")

Or set the default via an environment variable:

Bash
export OLLAMA_DEFAULT_MODEL=mistral  # Fast model for quick tasks

Want to see what’s happening under the hood? Enable debug mode to see detailed information about available models and their capabilities:

Bash
🔍 Debug Information:
    OLLAMA_DEFAULT_MODEL: codellama
    PYTHONPATH: /workspace
   
Available Models:
    codellama    28f8fd6cdc67    4.9 GB    4 days ago
    llama2       fe938a131f40    3.8 GB    14 months ago
    mistral      d364aa8d131e    4.1 GB    14 months ago

🤖 Using model: codellama (Specialized for code)
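
Debug mode is toggled with the AUGMENTS_DEBUG variable covered in Configuration Options below; for example:

Bash
export AUGMENTS_DEBUG=1
youtubeWisdom "https://youtube.com/watch?v=..."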

🎨 Is it still loading…?

Long wait times left me wondering whether a process had stalled, so every command now features elegant progress indicators that keep you informed:

🌒 Initializing YouTube Wisdom...
 Fetching video metadata...
 Downloading transcript...
📽️ Processing: How to Build a Neural Network
 Generating summary...
 Extracting key insights...
 Finding referenced resources...
 Processing complete!

Different indicators for different tasks help you track parallel operations at a glance. Whether it’s downloading content, processing text, or generating summaries, you’ll always know what’s happening.
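
For example, each step can pick its own loader style via the track_progress helper used throughout the examples below (download_transcript and summarize here are hypothetical placeholders, not actual library functions):

Python
from augments.lib.progress import track_progress, LoaderStyle

video_url = "https://youtube.com/watch?v=..."

# Different loader styles make concurrent steps easy to tell apart
with track_progress("Downloading transcript", LoaderStyle.DOTS):
    transcript = download_transcript(video_url)  # hypothetical helper

with track_progress("Generating summary", LoaderStyle.PULSE):
    summary = summarize(transcript)  # hypothetical helper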

Bring Your Own Commands

Have a specific workflow you’d like to enhance? Augments makes it easy to create your own AI-powered commands. Here’s how:

1. Generate Your Command

One line to scaffold a new command:

Bash
./create_command.sh myNewCommand

This creates a new Python script with all the boilerplate you need:

scripts/
└── my_new_command.py  # Your new command

2. Add Your Logic

The command template comes with everything you need to get started:

Python
#!/usr/bin/env python3
"""
Command: myNewCommand
Process content your way!
"""

import argparse
from augments.lib.utils import get_desktop_path
from augments.lib.progress import track_progress, LoaderStyle
from augments.lib.llm import OllamaClient

def process_input(content):
    # Replace this stub with your own processing logic
    return content

def main():
    # Parse command line arguments
    parser = argparse.ArgumentParser()
    parser.add_argument("input", help="Content to process")
    args = parser.parse_args()

    # Process with progress indicator
    with track_progress("Processing", LoaderStyle.DOTS):
        result = process_input(args.input)

    print("✨ Done!")

if __name__ == "__main__":
    main()

3. Use Built-in Tools

Augments provides powerful utilities to handle common tasks:

Python
from augments.lib.utils import get_desktop_path
from augments.lib.progress import track_progress, LoaderStyle
from augments.lib.llm import OllamaClient

# Show progress beautifully
with track_progress("Analyzing", LoaderStyle.PULSE):
    results = analyze_content()

# Process with AI
client = OllamaClient()
insights = client.generate("Explain this concept...")

# Save output nicely
path = get_desktop_path("analysis.md")
with open(path, "w") as f:
    f.write(format_results(insights))

4. Real Example: RSS Feed Analyzer

Here’s a practical example – a command that summarizes RSS feeds using AI:

Python
from augments.lib.llm import OllamaClient
from augments.lib.progress import track_progress, LoaderStyle
import feedparser

def process_feed(url: str):
    # Fetch the feed with progress indicator
    with track_progress(f"Reading {url}", LoaderStyle.DOTS):
        feed = feedparser.parse(url)

    # Use a fast model for quick summaries
    client = OllamaClient(model="mistral")
    summaries = []

    # Process recent entries
    for entry in feed.entries[:5]:
        with track_progress(f"Summarizing: {entry.title}", LoaderStyle.PULSE):
            summary = client.generate(f"TLDR: {entry.description}")
            summaries.append((entry.title, summary))

    return summaries
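
To turn this into a runnable command, you can wrap it in the same entry-point pattern as the template from step 2; a minimal sketch (this wrapper is illustrative, not the project’s actual rssWisdom script):

Python
import argparse

def main():
    # Assumes process_feed from the example above is in scope
    parser = argparse.ArgumentParser(description="Summarize recent posts from an RSS feed")
    parser.add_argument("url", help="RSS feed URL")
    args = parser.parse_args()

    for title, summary in process_feed(args.url):
        print(f"\n## {title}\n{summary}")

if __name__ == "__main__":
    main()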

This command takes an RSS feed URL and returns AI-generated summaries of recent posts. Run it with:

Bash
rssWisdom "https://example.com/feed.xml"

[Screenshot: RSS Wisdom summarizing a tech blog feed]

Get Started in Minutes

  1. Clone and install:
Bash
git clone https://github.com/username/augments.git
cd augments
./install.sh --shell zsh  # or bash
  2. Set up your AI preferences:
Bash
# Choose your preferred AI model
export OLLAMA_DEFAULT_MODEL=codellama  # for code analysis
# or
export OLLAMA_DEFAULT_MODEL=mistral    # for quick summaries
  3. Start processing content:
Bash
# Analyze a YouTube video
youtubeWisdom "https://youtube.com/watch?v=..."

# Process clipboard content
clipboardAnalyze

# Or try your own commands!
./create_command.sh myCommand

Configuration Options

Customize Augments to fit your workflow:

Bash
# AI Settings
AUGMENTS_DEBUG=1              # See what's happening under the hood
OLLAMA_DEFAULT_MODEL=llama2   # Your go-to AI model
OPENAI_API_KEY=sk-...         # Optional: Use OpenAI when needed

# Output Settings
DESKTOP_PATH=/custom/path     # Where to save processed content
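
These are plain environment variables, so a custom command can read them the usual way; a minimal sketch of how a script might honor them (an assumption about the internals, not the library’s actual code):

Python
import os

# Hedged sketch: how a custom command might respect these settings
debug = os.environ.get("AUGMENTS_DEBUG") == "1"
model = os.environ.get("OLLAMA_DEFAULT_MODEL", "mistral")
output_dir = os.environ.get("DESKTOP_PATH", os.path.expanduser("~/Desktop"))

if debug:
    print(f"🔍 Using model: {model}, saving to: {output_dir}")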

Contributing

Want to help build the future of information processing? Contributions are welcome via the GitHub repository.

