
May 02, 2025 · 6 min read
I ran into this issue more times than I’d like to admit: trying to set up custom actions in OpenAI’s Custom GPTs, only to hit their frustrating limitations. Your OpenAPI spec needs to be under 1MB and can’t have more than 30 operations. Sounds reasonable until you’re working with a real API that has 150+ endpoints and a 3.5MB spec file.
The first time this happened, I manually went through the spec and deleted endpoints I thought we wouldn’t need. Took forever, broke the schema validation, and I had to do it all over again when the API updated. There had to be a better way.
I looked around for existing tools. There are plenty of OpenAPI validators, converters, and documentation generators, but nothing that actually intelligently reduces a spec. Most tools either strip endpoints blindly or break the output because they don’t handle $ref dependencies properly.
I needed something that would pick the operations that actually matter, resolve the $ref dependencies, and keep the result a valid spec under the size limit.
Spec Shaver does exactly that. It prioritizes endpoints based on what actually matters.
Run it on your 3.5MB spec with 150 operations, and you get a clean 850KB file with the 30 most relevant endpoints. The schema stays valid because it automatically resolves all the $ref dependencies and includes only the schemas you actually need.
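If you’re wondering what resolving those $ref dependencies involves: keeping an endpoint means keeping every schema it references, transitively. Here’s a rough sketch of that dependency walk in TypeScript (my illustration of the idea, not Spec Shaver’s internals):

// Recursively collect every "$ref" string under a node.
// Illustration only -- not Spec Shaver's actual implementation.
function collectRefs(node: unknown, refs: Set<string>): void {
  if (Array.isArray(node)) {
    node.forEach((item) => collectRefs(item, refs));
  } else if (node && typeof node === 'object') {
    for (const [key, value] of Object.entries(node)) {
      if (key === '$ref' && typeof value === 'string') {
        refs.add(value); // e.g. "#/components/schemas/User"
      } else {
        collectRefs(value, refs);
      }
    }
  }
}

// Walk from a kept operation to the full set of schemas it needs,
// following refs nested inside other schemas (simplified here to
// #/components/schemas only).
function schemasFor(spec: any, operation: unknown): Set<string> {
  const seen = new Set<string>();
  const queue: unknown[] = [operation];
  while (queue.length > 0) {
    const found = new Set<string>();
    collectRefs(queue.pop(), found);
    for (const ref of found) {
      if (!seen.has(ref)) {
        seen.add(ref);
        const name = ref.split('/').pop()!;
        queue.push(spec.components?.schemas?.[name]);
      }
    }
  }
  return seen;
}

Everything outside that set can be dropped from components, which is where most of the size savings come from.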
Here’s how I use it:
When I just need to get under the limits fast:
spec-shaver reduce --input openapi.json --output gpt-actions.json
Done. Takes about a second, and I’ve got a spec that works with Custom GPTs.
When I need specific endpoints:
spec-shaver wizard --input openapi.json
The wizard lets me pick exactly which operations to keep. I can select by groups (like “users” or “projects”) or choose individual endpoints. If I mess up, I can go back and change my selections without starting over.
For projects where multiple people need to reduce the same spec consistently:
# Create a config file once
spec-shaver init
# Edit .spec-shaver.json to match your needs
# Then everyone just runs:
spec-shaver reduce --input openapi.json
Everyone gets the same reduced spec without having to remember all the CLI flags.
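For illustration, the config might look something like this. maxActions, maxSizeBytes, and coreEntities mirror the library options shown later; includeExamples is my guess at how the --include-examples flag maps, so treat these keys as hypothetical and start from whatever spec-shaver init generates:

{
  "maxActions": 30,
  "maxSizeBytes": 1048576,
  "coreEntities": ["users", "projects"],
  "includeExamples": false
}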
It doesn’t just randomly pick 30 endpoints. It scores each operation based on multiple factors and picks the ones that matter most. You can customize which entities are considered “core” for your API.
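As a rough mental model of what that scoring might look like (my sketch, not the tool’s actual heuristics):

// Illustrative scoring only -- the real heuristics live inside spec-shaver.
interface Operation {
  path: string;          // e.g. "/users/{id}"
  method: string;        // e.g. "get"
  deprecated?: boolean;
}

function score(op: Operation, coreEntities: string[]): number {
  let points = 0;
  // Operations touching a core entity rank highest.
  if (coreEntities.some((entity) => op.path.includes(entity))) points += 10;
  // Prefer shallow, canonical paths over deeply nested ones.
  points -= op.path.split('/').length;
  // Deprecated operations sink to the bottom.
  if (op.deprecated) points -= 100;
  return points;
}

// Score everything, sort, keep the top N.
const keep = (ops: Operation[], coreEntities: string[], max: number) =>
  [...ops]
    .sort((a, b) => score(b, coreEntities) - score(a, coreEntities))
    .slice(0, max);

Whatever signals the real scorer weighs, the shape is the same: score every operation, sort, keep the top N.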
It validates both the input and output schemas. If something’s wrong with your reduced spec, you’ll know immediately instead of finding out when you try to use it.
The reduced spec isn’t just smaller; it’s still a complete, valid OpenAPI document. All the descriptions, examples (if you want them), and schema definitions are intact. GPT can actually understand what the endpoints do.
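And you don’t have to take the tool’s word for it: the output is plain OpenAPI, so any validator can double-check it. For example, with @apidevtools/swagger-parser:

import SwaggerParser from '@apidevtools/swagger-parser';

// Independent sanity check: rejects if the reduced spec is invalid.
SwaggerParser.validate('gpt-actions.json')
  .then(() => console.log('gpt-actions.json is a valid OpenAPI document'))
  .catch((err: Error) => console.error('validation failed:', err.message));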
npm install -g spec-shaver
Or use it without installing:
npx spec-shaver reduce --input openapi.json
See what’s happening under the hood:
spec-shaver -v reduce --input openapi.json
Useful for debugging or understanding why certain endpoints were selected.
Tell it which entities matter for your API:
spec-shaver reduce \
--input openapi.json \
--actions 40 \
--core-entities users,teams,projects,tasks
Adjust the size limit:
spec-shaver reduce \
--input openapi.json \
--size 2097152 # 2MB
Keep example values in the schema:
spec-shaver reduce --input openapi.json --include-examples
The original use case:
# Quick reduction to meet GPT limits
spec-shaver reduce --input api-spec.json --output gpt-actions.json
# Verify it's under the limits
ls -lh gpt-actions.json
Upload gpt-actions.json to your Custom GPT and you’re done.
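ls only confirms the size. If you want to confirm the operation count too, a few lines of Node will do it (assuming a standard paths object):

import * as fs from 'fs';

// Count operations: every HTTP method entry under every path.
const METHODS = new Set(['get', 'post', 'put', 'patch', 'delete', 'head', 'options', 'trace']);
const spec = JSON.parse(fs.readFileSync('gpt-actions.json', 'utf8'));
const count = Object.values(spec.paths as Record<string, object>)
  .flatMap((pathItem) => Object.keys(pathItem))
  .filter((key) => METHODS.has(key)).length;
console.log(`${count} operations`); // needs to be 30 or fewer for Custom GPTs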
Generate a lightweight SDK for core operations:
spec-shaver reduce \
--input full-api.json \
--output sdk-spec.json \
--actions 50 \
--core-entities users,auth,billing
Create focused documentation for specific use cases:
# Public API docs - only GET endpoints
spec-shaver wizard --input internal-api.json --output public-api.json
# Then select only the GET operations you want to expose
Generate a manageable test schema:
spec-shaver reduce \
--input production-api.json \
--output test-schema.json \
--actions 20
You can also use Spec Shaver as a library:
import { OpenAPIReducer } from 'spec-shaver';
import * as fs from 'fs';

const schema = JSON.parse(fs.readFileSync('openapi.json', 'utf8'));

// Same knobs as the CLI flags: operation cap, size cap, core entities.
const reducer = new OpenAPIReducer({
  maxActions: 30,
  maxSizeBytes: 1024 * 1024, // 1MB, the Custom GPT limit
  coreEntities: ['users', 'projects'],
});

const result = reducer.reduce(schema);
console.log(`Reduced from ${result.originalOperationCount} to ${result.reducedOperationCount} operations`);
console.log(`Size: ${(result.sizeBytes / 1024).toFixed(1)} KB`);
fs.writeFileSync('reduced.json', JSON.stringify(result.schema, null, 2));
I’ve got a roadmap of features I want to add; check out the full roadmap for details.
This tool scratches my itch, but I’m sure there are use cases I haven’t thought of. If you’ve got ideas, found bugs, or want to contribute, dive in: the codebase is TypeScript, well-documented, and has clear contribution guidelines in CONTRIBUTING.md.
I built this because I needed it. If you find it useful (or if it doesn’t quite work for your use case), I’d love to hear about it. Open an issue on GitHub or start a discussion.
The more feedback I get, the better I can make this tool work for everyone dealing with oversized OpenAPI specs.
GitHub: github.com/gfargo/spec-shaver
NPM: npmjs.com/package/spec-shaver
License: MIT
Griffen Fargo