Lately, I’ve been rethinking how I consume and retain information. Like many developers, I find myself drowning in a backlog of tutorials, release updates, and deep-dive explainer videos. The rise of YouTube as a primary learning source means I often have multiple 20-minute videos queued up, hoping to catch up when I find “downtime.”
The reality? Many of these articles and videos sit in my backlog, barely glanced at. It feels wasteful not to leverage better tools to optimize my learning. That’s what led me back to Fabric—a tool I had installed long ago but never fully integrated into my workflow.
Fabric is an AI-powered tool designed to integrate seamlessly into daily workflows, enhancing how we process and interact with information. Since early 2023, its development has focused on practical AI applications for real-world tasks—breaking down complex problems into manageable, automated components.
A key feature of Fabric is Patterns: curated, AI-powered prompts designed for specific tasks, such as:

- `extract_wisdom`, which pulls the key insights, ideas, and references out of a piece of content
- `summarize`, which condenses long-form text into a short overview
- `extract_links`, which collects the links and references a source mentions
With Fabric, managing and leveraging these prompts becomes effortless, allowing users to fine-tune their workflow automation.
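In practice, applying a Pattern is just a matter of piping text through fabric. For example, to summarize whatever is currently on the macOS clipboard:

```bash
# Run the clipboard contents through the summarize Pattern
pbpaste | fabric --pattern summarize
```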
At the moment, I’ve only scratched the surface of what Fabric can do, primarily using a few pre-baked Patterns. However, I see immense potential in composing these AI-driven tools to streamline slow, repetitive tasks.
Since productivity bottlenecks are unique to each individual, it makes sense that we define our own augmentations, much as developers have long shared their dotfiles to fine-tune their environments. Fabric feels like a similar evolution, but for personal computing.
My initial approach was lightweight—using simple Bash functions to extract key insights from YouTube videos and clipboard content.
```bash
# Extract wisdom from a YouTube video
ewYoutube() {
  local url="$1"
  yt --transcript "$url" | fabric --copy -p extract_wisdom
}
```
```bash
➜ ewYoutube "https://www.youtube.com/watch?v=Y2mDwW2pMv4" | say
```
I started by leveraging macOS’s built-in `say` command to read the extracted text aloud. While functional, it wasn’t exactly pleasant to listen to for extended periods.
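To be fair, `say` can be made a little more bearable with its built-in voice and rate flags; a quick tweak, assuming the Samantha voice is installed on your Mac:

```bash
# A specific voice plus a faster speaking rate (words per minute)
ewYoutube "https://www.youtube.com/watch?v=Y2mDwW2pMv4" | say -v Samantha -r 200
```

Even with a nicer voice and a faster rate, the robotic delivery wears thin, which led me to explore better text-to-speech options.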
```bash
# Extract wisdom from a YouTube video
ytEw() {
  local url="$1"
  yt --transcript "$url" | fabric --copy -p extract_wisdom
}

ytEwAudio() {
  local url="$1"
  # Extract the video ID (the v= query parameter) from the URL
  local videoId=$(echo "$url" | sed -n 's/.*[?&]v=\([^&]*\).*/\1/p')
  yt --transcript "$url" |
    fabric -p extract_wisdom |
    tts "${videoId}-wisdom.mp3" --service gcp --speed 1.1 --voice en-US-Standard-I
}

# Extract wisdom from the clipboard
ewClip() {
  pbpaste | fabric --pattern extract_wisdom
}

# Summarize the clipboard
summarizeClip() {
  pbpaste | fabric --pattern summarize
}

ytSummarize() {
  local url="$1"
  yt --transcript "$url" | fabric --copy -p summarize
}

ytSummarizeAudio() {
  local url="$1"
  # Extract the video ID (the v= query parameter) from the URL
  local videoId=$(echo "$url" | sed -n 's/.*[?&]v=\([^&]*\).*/\1/p')
  yt --transcript "$url" |
    fabric -p summarize |
    tts "${videoId}-summary.mp3" --service gcp --speed 1.1 --voice en-US-Standard-I
}

# TODO: Better composability
# Desire:
# - Get Audio Overview
# - Get List of Important Links / References
# - Important Code Snippets
# - Randomize voice for different feel between stories
```
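One of those TODOs, randomizing the voice, is easy to sketch in Bash. The voice names below are Google Cloud TTS identifiers I’d expect the `tts` CLI to accept, but treat the exact list as an assumption to verify:

```bash
# Pick a random voice so back-to-back summaries don't all sound the same
# (voice names are assumptions; check what your tts --service gcp setup supports)
randomVoice() {
  local voices=("en-US-Standard-I" "en-US-Standard-J" "en-GB-Standard-B")
  echo "${voices[RANDOM % ${#voices[@]}]}"
}
```

Swapping the hard-coded `--voice en-US-Standard-I` for `--voice "$(randomVoice)"` inside `ytEwAudio` and `ytSummarizeAudio` would give each story its own narrator.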
The next step was integrating a more advanced text-to-speech (TTS) solution. With Google Cloud TTS, my aliases evolved to store summaries as `.mp3` files, making them easier to reference and replay.
```bash
➜ ytSummarizeAudio "https://www.youtube.com/watch?v=Y2mDwW2pMv4"
```
This system worked well for generating digestible audio summaries, but it had one glaring flaw—once converted to speech, the text-based insights were lost. This realization led me further down the rabbit hole.
As my workflow evolved, I needed more than just spoken summaries. I wanted a structured, persistent knowledge repository where I could:

- keep the text-based insights alongside the generated audio
- store summaries and extracted wisdom as Markdown documents
- search and link captured insights for later retrieval
This led me to experiment with Python subprocesses to compose and chain these functions more effectively. At the same time, I started exploring Obsidian, a powerful Markdown-based knowledge management tool. Creating a well-structured library of Markdown documents felt like the perfect way to extend my learning system into something more permanent and referenceable.
```python
import subprocess
import os

import pyperclip  # For clipboard interaction

from common_defs import get_random_voice  # Assuming common_defs.py exists


def run_fabric_pattern(text, pattern):
    """Runs a Fabric pattern over the given text and returns its output."""
    try:
        result = subprocess.run(
            ["fabric", "-p", pattern],
            input=text, capture_output=True, text=True, check=True,
        )
        return result.stdout
    except subprocess.CalledProcessError as e:
        print(f"Error running fabric pattern '{pattern}': {e}")
        return None


def generate_tts(text, filename, voice="en-US-Standard-I", speed=1.1):
    """Generates an mp3 by piping text to the same tts CLI used in the shell functions."""
    try:
        subprocess.run(
            ["tts", filename, "--service", "gcp", "--speed", str(speed), "--voice", voice],
            input=text, text=True, check=True,
        )
    except subprocess.CalledProcessError as e:
        print(f"Error generating TTS: {e}")


def create_markdown(title, summary, wisdom, links, audio_file):
    """Creates a Markdown document from the collected pattern outputs."""
    return f"""
# Analysis of: {title}

## Summary
{summary or "No summary available."}

## Key Wisdom
{wisdom or "No key wisdom extracted."}

## Links/References
{links or "No links found."}

## Audio Summary
[Listen to the summary]({audio_file})
"""


def main():
    text = pyperclip.paste()  # Get text from the clipboard
    if not text:
        print("Clipboard is empty.")
        return

    title = input("Enter a title (or press Enter to auto-detect): ") or "Clipboard Content"

    summary = run_fabric_pattern(text, "summarize")
    wisdom = run_fabric_pattern(text, "extract_wisdom")
    links = run_fabric_pattern(text, "extract_links")

    voice = get_random_voice()
    audio_file = f"{title.replace(' ', '_')}-analysis.mp3"
    if summary:
        generate_tts(summary, audio_file, voice)

    markdown_output = create_markdown(title, summary, wisdom, links, audio_file)
    output_filename = os.path.expanduser(f"~/Desktop/{title.replace(' ', '_')}-analysis.md")
    with open(output_filename, "w") as f:
        f.write(markdown_output)
    print(f"Markdown output written to: {output_filename}")


if __name__ == "__main__":
    main()
```
By integrating Python’s `subprocess` module, I’m aiming to create a fully automated and structured knowledge repository. The next step is refining the indexing process so that each captured insight is stored logically for later retrieval. Since I’ve been exploring Obsidian, this feels like the perfect opportunity to link these Markdown-based summaries into a more expansive, interlinked knowledge base.
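As a first pass at that indexing step, here is a minimal sketch; the vault path, the `save_to_vault` helper, and the frontmatter fields are my own placeholders rather than anything Fabric or Obsidian prescribes:

```python
import datetime
import os

# Hypothetical vault location; point this at your actual Obsidian vault.
VAULT_INBOX = os.path.expanduser("~/Documents/Obsidian/Inbox")


def save_to_vault(title, markdown_body, tags=("fabric", "wisdom")):
    """Writes a note with YAML frontmatter so Obsidian can index, tag, and link it."""
    os.makedirs(VAULT_INBOX, exist_ok=True)
    frontmatter = "\n".join([
        "---",
        f"title: {title}",
        f"date: {datetime.date.today().isoformat()}",
        f"tags: [{', '.join(tags)}]",
        "---",
        "",
    ])
    path = os.path.join(VAULT_INBOX, f"{title.replace(' ', '_')}.md")
    with open(path, "w") as f:
        f.write(frontmatter + markdown_body)
    return path
```

From there, Obsidian’s backlinks and graph view take over: every `[[wikilink]]` added to a summary stitches it into the wider knowledge base.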
Overall, this exploration has been incredibly exciting, and I’m curious to see how far I can continue to optimize my learning by leveraging AI and automation.