
Why I Built Spec Shaver

March 06, 2026 · 7 min read

The Problem

I ran into this issue more times than I’d like to admit: trying to set up custom actions in OpenAI’s Custom GPTs, only to hit their frustrating limitations. Your OpenAPI spec needs to be under 1MB and can’t have more than 30 operations. Sounds reasonable until you’re working with a real API that has 150+ endpoints and a 3.5MB spec file.

The first time this happened, I manually went through the spec and deleted the endpoints I thought I wouldn't need. It took forever, broke schema validation, and I had to do it all over again when the API updated. There had to be a better way.

What Was Already Out There

I looked around for existing tools. There are plenty of OpenAPI validators, converters, and documentation generators, but nothing that intelligently reduces a spec. Most tools either:

  • Strip out all the documentation (making the spec useless for GPT actions)
  • Remove random endpoints without considering what’s actually important
  • Break the schema by not resolving $ref dependencies properly
  • Require manual configuration that’s just as tedious as doing it by hand

I needed something that would:

  1. Automatically pick the most important endpoints
  2. Keep the schema valid
  3. Stay under the size limit
  4. Let me override the automatic selection when needed

The Solution

Spec Shaver does exactly that. It prioritizes endpoints based on what actually matters:

  • Core entity operations (users, accounts, projects, etc.)
  • HTTP method importance (GET > POST > PATCH > DELETE)
  • Endpoint type (collection vs single resource)
  • Documentation quality

Run it on your 3.5MB spec with 150 operations, and you get a clean 850KB file with the 30 most relevant endpoints. The schema stays valid because it automatically resolves all the $ref dependencies and includes only the schemas you actually need.
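Keeping the schema valid means every schema reachable from a kept operation has to come along for the ride. Here's a minimal sketch of what that transitive $ref walk can look like; this is my own illustration for this post, not Spec Shaver's actual implementation:

```typescript
// Hypothetical sketch of transitive $ref collection (not the tool's code).
type Json = null | boolean | number | string | Json[] | { [key: string]: Json };

interface SpecLike {
  components?: { schemas?: Record<string, Json> };
}

function collectSchemaRefs(node: Json, spec: SpecLike, found: Set<string> = new Set()): Set<string> {
  if (Array.isArray(node)) {
    for (const item of node) collectSchemaRefs(item, spec, found);
  } else if (node !== null && typeof node === 'object') {
    for (const [key, value] of Object.entries(node)) {
      if (key === '$ref' && typeof value === 'string' && value.startsWith('#/components/schemas/')) {
        const name = value.slice('#/components/schemas/'.length);
        if (!found.has(name)) {
          found.add(name);
          // Recurse into the referenced schema so nested $refs survive too.
          const target = spec.components?.schemas?.[name];
          if (target !== undefined) collectSchemaRefs(target, spec, found);
        }
      } else {
        collectSchemaRefs(value, spec, found);
      }
    }
  }
  return found;
}
```

Run this over each kept operation, union the results, and you know exactly which entries in `components.schemas` the reduced spec still needs.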

Real-World Usage

Here’s how I use it:

Quick Reduction

When I just need to get under the limits fast:

```bash
spec-shaver reduce --input openapi.json --output gpt-actions.json
```

Done. Takes about a second, and I’ve got a spec that works with Custom GPTs.

Custom Selection

When I need specific endpoints:

```bash
spec-shaver wizard --input openapi.json
```

The wizard lets me pick exactly which operations to keep. I can select by groups (like “users” or “projects”) or choose individual endpoints. If I mess up, I can go back and change my selections without starting over.

Team Configuration

For projects where multiple people need to reduce the same spec consistently:

```bash
# Create a config file once
spec-shaver init

# Edit .spec-shaver.json to match your needs
# Then everyone just runs:
spec-shaver reduce --input openapi.json
```

Everyone gets the same reduced spec without having to remember all the CLI flags.
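For illustration, a `.spec-shaver.json` might look something like this. The field names here are my guesses mirroring the CLI flags shown in this post; check the tool's docs for the real config schema:

```json
{
  "input": "openapi.json",
  "output": "gpt-actions.json",
  "actions": 30,
  "size": 1048576,
  "coreEntities": ["users", "projects"],
  "includeExamples": false
}
```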

What Makes It Different

Smart Prioritization

It doesn’t just randomly pick 30 endpoints. It scores each operation based on multiple factors and picks the ones that matter most. You can customize which entities are considered “core” for your API.
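To make "scores each operation" concrete, here's a rough illustration of what multi-factor scoring can look like. The weights and field names are invented for this post, not Spec Shaver's real heuristics:

```typescript
// Invented weights for illustration; the real heuristics live in spec-shaver.
interface OperationInfo {
  method: string;          // 'get', 'post', ...
  path: string;            // e.g. '/users/{id}'
  hasDescription: boolean;
}

const METHOD_WEIGHT: Record<string, number> = { get: 4, post: 3, put: 2, patch: 2, delete: 1 };

function scoreOperation(op: OperationInfo, coreEntities: string[]): number {
  let score = METHOD_WEIGHT[op.method.toLowerCase()] ?? 0;
  // Operations on user-designated core entities get a large boost.
  if (coreEntities.some((entity) => op.path.includes(`/${entity}`))) score += 5;
  // Collection endpoints (no trailing path parameter) rank above item endpoints.
  if (!op.path.endsWith('}')) score += 1;
  // A documented operation is far more useful to a GPT action.
  if (op.hasDescription) score += 2;
  return score;
}
```

Sort operations by score, keep the top N, and the "most relevant" slice falls out naturally.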

Schema Validation

It validates both the input and output schemas. If something’s wrong with your reduced spec, you’ll know immediately instead of finding out when you try to use it.

Actually Usable Output

The reduced spec isn’t just smaller; it’s still a complete, valid OpenAPI document. All the descriptions, examples (if you want them), and schema definitions are intact. GPT can actually understand what the endpoints do.

Installation

```bash
npm install -g spec-shaver
```

Or use it without installing:

```bash
npx spec-shaver reduce --input openapi.json
```

Advanced Options

Verbose Mode

See what’s happening under the hood:

```bash
spec-shaver -v reduce --input openapi.json
```

Useful for debugging or understanding why certain endpoints were selected.

Custom Entity Priorities

Tell it which entities matter for your API:

```bash
spec-shaver reduce \
  --input openapi.json \
  --actions 40 \
  --core-entities users,teams,projects,tasks
```

Size Limits

Adjust the size limit:

```bash
spec-shaver reduce \
  --input openapi.json \
  --size 2097152  # 2MB
```

Include Examples

Keep example values in the schema:

```bash
spec-shaver reduce --input openapi.json --include-examples
```

Common Scenarios

OpenAI Custom GPT Actions

The original use case:

```bash
# Quick reduction to meet GPT limits
spec-shaver reduce --input api-spec.json --output gpt-actions.json

# Verify it's under the limits
ls -lh gpt-actions.json
```

Upload gpt-actions.json to your Custom GPT and you’re done.
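If you'd rather verify both limits programmatically, a small standalone check might look like this. It's written for this post, assumes a JSON spec, and counts only HTTP-method keys under `paths` (skipping siblings like `parameters`):

```typescript
// Standalone limit check written for this post (assumes a JSON spec file).
import * as fs from 'fs';

const HTTP_METHODS = new Set(['get', 'post', 'put', 'patch', 'delete', 'head', 'options', 'trace']);

function checkGptLimits(specPath: string): { operations: number; bytes: number; ok: boolean } {
  const raw = fs.readFileSync(specPath, 'utf8');
  const spec = JSON.parse(raw);
  let operations = 0;
  for (const pathItem of Object.values(spec.paths ?? {})) {
    // Count only HTTP-method keys; path items can also hold 'parameters', etc.
    for (const key of Object.keys(pathItem as object)) {
      if (HTTP_METHODS.has(key)) operations += 1;
    }
  }
  const bytes = Buffer.byteLength(raw, 'utf8');
  return { operations, bytes, ok: operations <= 30 && bytes <= 1024 * 1024 };
}
```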

SDK Generation

Generate a lightweight SDK for core operations:

```bash
spec-shaver reduce \
  --input full-api.json \
  --output sdk-spec.json \
  --actions 50 \
  --core-entities users,auth,billing
```

API Documentation

Create focused documentation for specific use cases:

```bash
# Public API docs - only GET endpoints
spec-shaver wizard --input internal-api.json --output public-api.json
# Then select only the GET operations you want to expose
```

Testing

Generate a manageable test schema:

```bash
spec-shaver reduce \
  --input production-api.json \
  --output test-schema.json \
  --actions 20
```

Programmatic Usage

You can also use Spec Shaver as a library:

```typescript
import { OpenAPIReducer } from 'spec-shaver';
import * as fs from 'fs';

const schema = JSON.parse(fs.readFileSync('openapi.json', 'utf8'));

const reducer = new OpenAPIReducer({
  maxActions: 30,
  maxSizeBytes: 1024 * 1024,
  coreEntities: ['users', 'projects'],
});

const result = reducer.reduce(schema);

console.log(`Reduced from ${result.originalOperationCount} to ${result.reducedOperationCount} operations`);
console.log(`Size: ${(result.sizeBytes / 1024).toFixed(1)} KB`);

fs.writeFileSync('reduced.json', JSON.stringify(result.schema, null, 2));
```

What’s Next

I’ve got a roadmap of features I want to add:

  • Schema merging: Combine multiple API specs before reducing
  • Operation search: Filter operations by keyword in the wizard
  • Dry-run mode: Preview what would be reduced without writing files
  • YAML support: Work with YAML specs directly
  • Custom scoring: Define your own prioritization logic

Check out the full roadmap for details.

Contributing

This tool scratches my itch, but I’m sure there are use cases I haven’t thought of. If you’ve got ideas, found bugs, or want to contribute:

  • Issues: GitHub Issues
  • Pull Requests: Always welcome
  • Discussions: Share your use cases and suggestions

The codebase is TypeScript, well-documented, and has clear contribution guidelines in CONTRIBUTING.md.

Feedback

I built this because I needed it. If you find it useful (or if it doesn’t quite work for your use case), I’d love to hear about it. Open an issue on GitHub or start a discussion.

The more feedback I get, the better I can make this tool work for everyone dealing with oversized OpenAPI specs.


GitHub: github.com/gfargo/spec-shaver
NPM: npmjs.com/package/spec-shaver
License: MIT

Written by Griffen Fargo