Everyone’s building MCP servers these days. Open Twitter—sorry, X—and you’ll see a new one announced every few hours. Database MCPs. Browser automation MCPs. GitHub MCPs. There’s probably an MCP for ordering pizza by now.
I tried a few. I really did. And then I ripped them all out.
The Context Tax Nobody Talks About
Here’s the thing about MCPs that nobody seems to mention: they’re expensive. Not in dollars—in tokens. In context.
Every MCP you add to your setup injects its tool definitions into the context window. That’s unavoidable. But how much? I’ve seen GitHub’s MCP eat up around 23,000 tokens. The Playwright MCP? About 13,700 tokens. Chrome DevTools? 18,000. And these numbers stack up fast.
Let’s do some math. A typical model gives you 200k tokens to work with. Sound like a lot? It is—until you’ve blown 50k on tool definitions before writing a single line of code. Now you’ve got 150k left for actual work. That’s a 25% haircut just to have MCPs sitting there, waiting to be called.
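Concretely, here's the back-of-the-envelope version, using the token counts I measured above (three MCPs loaded, nothing else):

```shell
# Rough context budget with the GitHub, Playwright, and Chrome DevTools
# MCPs loaded — numbers are my measurements, rounded
context=200000
overhead=$((23000 + 13700 + 18000))     # GitHub + Playwright + DevTools
remaining=$((context - overhead))

echo "MCP overhead:  ${overhead} tokens"          # 54700
echo "Left for work: ${remaining} tokens"         # 145300
echo "Haircut:       $((overhead * 100 / context))%"  # 27
```

Your exact numbers will vary by MCP version, but the shape of the problem doesn't.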
And here’s the kicker: most of the time, you’re not even using all those tools. You’ve got 30 methods for browser automation sitting in the context window when all you needed was “screenshot this page.”
What CLIs Do Better
I’ve been using AI coding agents for a while now—Claude Code, Cursor, Codex, you name it. And you know what they’re all exceptionally good at? Running shell commands. That’s kind of their bread and butter.
When I need to interact with GitHub, I don’t reach for an MCP. I just use gh. When I need to deploy something, I use vercel. Database queries? psql. These tools already exist, they’re battle-tested, and every modern AI model already knows how to use them.
The token cost? Zero. Well, close to zero—there’s the output from running the command, but that’s actual information I asked for, not a 10,000-token manifest of methods I might someday want to call.
Here’s a real example. I needed the agent to check the status of a pull request, review the changes, and add a comment. With the GitHub MCP, that’s three tool calls through a specialized protocol, each with its own overhead. With gh, it’s:
```shell
gh pr view 123
gh pr diff 123
gh pr comment 123 --body "Looks good, ship it"
```
Three shell commands. Done. No special training on a custom tool definition needed. The agent just used the CLI it already understands.
The Session Problem
One legitimate criticism of CLIs is session management. Armin Ronacher makes this point well—doing multi-turn interactions with CLI tools can be awkward because you need to teach the agent how to manage state between calls.
But here’s my counterpoint: most of the time, I don’t need sessions. I need atomic operations. Read this file. Run this command. Deploy this thing. The stateless nature of CLIs isn’t a bug—it’s a feature. It keeps things predictable.
When I genuinely need stateful interaction—debugging with lldb, for instance—I spin up a tmux session. The AI attaches, does its work, and I can inspect the state any time I want. No MCP required.
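For what it’s worth, the tmux pattern is about six commands. This is a sketch — the session name and the binary are placeholders, and it assumes tmux (and lldb) are installed:

```shell
# Drive a stateful debugger through tmux. The agent runs these same
# commands; "debug" and "./myapp" are illustrative names.
tmux new-session -d -s debug                    # start a detached session
tmux send-keys -t debug 'lldb ./myapp' Enter    # launch lldb inside it
tmux send-keys -t debug 'breakpoint set -n main' Enter
tmux send-keys -t debug 'run' Enter
sleep 1                                         # give lldb a moment
tmux capture-pane -t debug -p                   # dump the pane as plain text
tmux kill-session -t debug                      # tear down when done
```

The nice part: `capture-pane` turns the whole stateful interaction back into plain text, which composes with everything else.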
Composability Through Simplicity
There’s something elegant about a workflow where the AI just runs commands and reads their output. Everything composes naturally because everything is text.
Need to save the output of a command for later? Redirect it to a file. Need to chain operations? Use pipes. Need to parse JSON? jq is right there. The Unix philosophy has been solving these problems for 50 years.
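Here’s a minimal sketch of that composition, with a hard-coded JSON blob standing in for any JSON-emitting CLI (think `gh pr view --json ...`); it assumes `jq` is available:

```shell
# Stand-in for a CLI that emits JSON; redirect to disk so the result
# survives the session (the PR data here is made up)
echo '{"number": 123, "state": "OPEN", "title": "Fix login bug"}' > pr.json

# Pull out one field from the saved artifact
jq -r '.state' pr.json                                 # prints: OPEN

# Chain through a pipe like any other Unix tool
jq -r '.title' pr.json | tr '[:lower:]' '[:upper:]'    # prints: FIX LOGIN BUG
```

No intermediate result ever has to round-trip through the model’s context; it lives on disk or in a pipe.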
MCPs, by contrast, create their own little universes. The output of one MCP tool goes through the context window before it can reach another tool or get saved anywhere. That’s a roundtrip through the AI’s brain for every intermediate result. It’s not just inefficient—it’s fragile.
I watched an MCP-based workflow fail spectacularly once because the AI hit its context limit mid-operation. All that state, gone. With CLIs, if something fails, I’ve got artifacts on disk. I can pick up where I left off.
But What About Discovery?
One argument for MCPs is discoverability. The tool definitions tell the AI exactly what’s available. Fair point.
But modern LLMs already know what common CLIs do. They were trained on mountains of documentation, Stack Overflow answers, and blog posts about these tools. When I ask an agent to use gh, it doesn’t need a schema—it knows the commands because it’s read the man pages (figuratively speaking).
For specialized tools, a simple section in my AGENTS.md does the job. I’ve documented my custom utilities in maybe 200 tokens total. That’s a rounding error compared to the cost of loading an MCP.
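To give a flavor of what that looks like, an AGENTS.md entry can be this small — the subcommands below are placeholders, not my tools’ actual interfaces:

```markdown
## Custom tools

- `queuestack add "<title>"` — create a task; `queuestack list` shows open ones
- `docset2md <symbol>` — print offline API docs for a symbol as markdown
```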
The Real Problem MCPs Solve
I don’t want to be completely dismissive. MCPs do solve a real problem: AI agents interacting with systems that don’t have good CLI interfaces. Some APIs are only accessible over HTTP. Some workflows genuinely need complex state management.
But for developer tooling? For interacting with services that were designed by developers, for developers? Most of these already have excellent CLIs. We don’t need to reinvent the wheel.
The rush to build MCPs feels like a solution in search of a problem. “Look, we can expose this API to AI agents!” Sure. But could you also just write a bash script? Probably. And the bash script would work in 10 years when MCP is a historical footnote.
My Setup
Here’s what I actually use, documented in my AGENTS.md:
- `gh` for GitHub
- `vercel` for deployments
- `psql` for database work
- `queuestack` for task tracking
- `docset2md` for offline API documentation
- A handful of custom scripts for project-specific needs
Total context overhead: negligible. Total functionality: everything I need.
I had one MCP configured for a while—I forget which one—and eventually removed it. Not because it didn’t work, but because the agent could accomplish the same tasks faster by reading code directly or running a CLI command. The MCP was just… in the way.
The Takeaway
If you’re deep in the MCP ecosystem and it’s working for you, great. Keep doing what works.
But if you’re considering adding MCPs to your workflow, ask yourself: is there a CLI that already does this? Because there probably is. And it probably does it better, faster, and cheaper than any MCP will.
The best tool is often the one that’s been quietly solving your problem for years. You just need to tell your AI about it.
And if you can’t find a CLI that does what you need? Build one. It’s easier than you think.
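To show how low the bar is, here’s a complete single-file CLI in POSIX shell — a toy `notes` tool. Every detail (the name, the subcommands, the storage file) is hypothetical; the point is the shape:

```shell
# Write a tiny CLI to a file, make it executable, and use it.
# An agent can discover and drive this with zero tool definitions.
cat > notes <<'EOF'
#!/bin/sh
set -eu
NOTES_FILE="${NOTES_FILE:-./.notes}"
case "${1:-}" in
  add)  shift; printf '%s\n' "$*" >> "$NOTES_FILE" ;;
  list) [ -f "$NOTES_FILE" ] && cat "$NOTES_FILE" ;;
  *)    echo "usage: notes {add <text>|list}" >&2; exit 2 ;;
esac
EOF
chmod +x notes

./notes add "ship the blog post"
./notes list    # prints: ship the blog post
```

Fifteen lines, plain text in and out, and it composes with pipes, redirects, and everything else above.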