Revolutionize the way you handle information with an AI-driven command-line toolbelt. Whether you’re investigating YouTube videos, scrutinizing articles, or managing clipboard content, Augments is designed to streamline your daily digital tasks so you reach your objectives faster. The question driving it: how can you draw insights more efficiently, with a tool that keeps you in a problem-solving flow state instead of bogging you down in decision-making and data-processing fatigue?
Read more about how it was built in the blog post Building Augments: Cybernetic Enhancements for the Information Age.
Bringing AI capabilities offline! Now you can run powerful language models locally using Ollama integration. Choose the right model for each task – from quick summaries to deep code analysis.
from augments.lib.llm import OllamaClient, ModelType

# Pick the best model for your task
client = OllamaClient(model=ModelType.CODE.value)  # Optimized for code
client.generate("Explain this function...")

Or set the default model once via an environment variable:

export OLLAMA_DEFAULT_MODEL=mistral  # Fast model for quick tasks
Want to see what’s happening under the hood? Set AUGMENTS_DEBUG=1 to see detailed information about available models and their capabilities:
🔍 Debug Information:
• OLLAMA_DEFAULT_MODEL: codellama
• PYTHONPATH: /workspace
Available Models:
• codellama 28f8fd6cdc67 4.9 GB 4 days ago
• llama2 fe938a131f40 3.8 GB 14 months ago
• mistral d364aa8d131e 4.1 GB 14 months ago
🤖 Using model: codellama (Specialized for code)
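Debug mode is controlled by the same AUGMENTS_DEBUG flag listed in the configuration section below. If you’d rather flip it from inside a script than from your shell, here’s a minimal sketch, assuming the flag is read from the environment when the client is created:

import os

# Assumption: AUGMENTS_DEBUG is checked from the environment when the client starts up
os.environ["AUGMENTS_DEBUG"] = "1"

from augments.lib.llm import OllamaClient

client = OllamaClient()  # With debug on, model details like those above are printed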
Long waits often left me wondering whether a process had stalled, so every command now features elegant progress indicators that keep you informed:
🌒 Initializing YouTube Wisdom...
⠋ Fetching video metadata...
◐ Downloading transcript...
📽️ Processing: How to Build a Neural Network
⠋ Generating summary...
◐ Extracting key insights...
◓ Finding referenced resources...
✨ Processing complete!
Different indicators for different tasks help you track parallel operations at a glance. Whether it’s downloading content, processing text, or generating summaries, you’ll always know what’s happening.
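In your own scripts, the same indicators come from track_progress and LoaderStyle. The sketch below uses only the DOTS and PULSE styles that appear elsewhere in this post, and assumes the augments library and an Ollama model are available locally:

from augments.lib.progress import track_progress, LoaderStyle
from augments.lib.llm import OllamaClient

client = OllamaClient()
article = open("article.txt").read()  # any local text file

# Different spinner styles make the individual steps easy to tell apart
with track_progress("Reading article", LoaderStyle.DOTS):
    word_count = len(article.split())

with track_progress("Generating summary", LoaderStyle.PULSE):
    summary = client.generate(f"Summarize this in three bullet points:\n{article}")

print(f"✨ Distilled {word_count} words into:\n{summary}")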
Have a specific workflow you’d like to enhance? Augments makes it easy to create your own AI-powered commands. Here’s how:
One line to scaffold a new command:
./create_command.sh myNewCommand
This creates a new Python script with all the boilerplate you need:
scripts/
└── my_new_command.py # Your new command
The command template comes with everything you need to get started:
#!/usr/bin/env python3
"""
Command: myNewCommand
Process content your way!
"""
import argparse

from augments.lib.utils import get_desktop_path
from augments.lib.progress import track_progress, LoaderStyle
from augments.lib.llm import OllamaClient


def process_input(content: str) -> str:
    # Replace this stub with your own processing logic
    return content


def main():
    # Parse command line arguments
    parser = argparse.ArgumentParser()
    parser.add_argument("input", help="Content to process")
    args = parser.parse_args()

    # Process with progress indicator
    with track_progress("Processing", LoaderStyle.DOTS):
        result = process_input(args.input)

    print("✨ Done!")


if __name__ == "__main__":
    main()
Augments provides powerful utilities to handle common tasks:
from augments.lib.progress import track_progress, LoaderStyle
from augments.lib.llm import OllamaClient
from augments.lib.utils import get_desktop_path

# Show progress beautifully
with track_progress("Analyzing", LoaderStyle.PULSE):
    results = analyze_content()

# Process with AI
client = OllamaClient()
insights = client.generate("Explain this concept...")

# Save output nicely
path = get_desktop_path("analysis.md")
with open(path, "w") as f:
    f.write(format_results(insights))
Here’s a practical example – a command that summarizes RSS feeds using AI:
import feedparser

from augments.lib.llm import OllamaClient
from augments.lib.progress import track_progress, LoaderStyle


def process_feed(url: str):
    # Fetch the feed with progress indicator
    with track_progress(f"Reading {url}", LoaderStyle.DOTS):
        feed = feedparser.parse(url)

    # Use a fast model for quick summaries
    client = OllamaClient(model="mistral")
    summaries = []

    # Process recent entries
    for entry in feed.entries[:5]:
        with track_progress(f"Summarizing: {entry.title}", LoaderStyle.PULSE):
            summary = client.generate(f"TLDR: {entry.description}")
            summaries.append((entry.title, summary))

    return summaries
This command takes an RSS feed URL and returns AI-generated summaries of recent posts. Run it with:
rssWisdom "https://example.com/feed.xml"
[Screenshot: RSS Wisdom summarizing a tech blog feed]
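To keep those summaries around, the same get_desktop_path utility can write them out as Markdown. A small sketch that could follow process_feed (the rss-digest.md filename is just an example):

from augments.lib.utils import get_desktop_path

def save_summaries(summaries):
    # summaries: the list of (title, summary) tuples returned by process_feed
    path = get_desktop_path("rss-digest.md")  # example filename
    with open(path, "w") as f:
        for title, summary in summaries:
            f.write(f"## {title}\n\n{summary}\n\n")
    return path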
To install, clone the repository and run the installer for your shell:
git clone https://github.com/username/augments.git
cd augments
./install.sh --shell zsh  # or bash
# Choose your preferred AI model
export OLLAMA_DEFAULT_MODEL=codellama # for code analysis
# or
export OLLAMA_DEFAULT_MODEL=mistral # for quick summaries
# Analyze a YouTube video
youtubeWisdom "https://youtube.com/watch?v=..."
# Process clipboard content
clipboardAnalyze
# Or try your own commands!
./create_command.sh myCommand
Customize Augments to fit your workflow:
# AI Settings
AUGMENTS_DEBUG=1 # See what's happening under the hood
OLLAMA_DEFAULT_MODEL=llama2 # Your go-to AI model
OPENAI_API_KEY=sk-... # Optional: Use OpenAI when needed
# Output Settings
DESKTOP_PATH=/custom/path # Where to save processed content
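Custom commands can honor the same settings. A minimal sketch of reading them (the fallback values here are assumptions, not the library’s actual defaults):

import os

# Read the same settings Augments uses (fallbacks below are assumptions)
DEBUG = os.getenv("AUGMENTS_DEBUG") == "1"
MODEL = os.getenv("OLLAMA_DEFAULT_MODEL", "mistral")
OUTPUT_DIR = os.getenv("DESKTOP_PATH", os.path.expanduser("~/Desktop"))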
Want to help build the future of information processing? Here’s how to get involved: