Cool, adding this to my list of MCP CLIs:
Also https://github.com/mcpshim/mcpshim
It turns out everyone is having the same idea.
Precisely, there are about 100 of these, and everyone makes a new one every week.
This is entirely predictable: we get an army of vibe coders, vibe coding up tools to make vibe coding easier.
There is nobody making a new one every week.
We had `curl`, HTTP and OpenAPI specs, but we created MCP. Now we're wrapping MCP into CLIs...
MCP is a dead end, just ignore it and it will go away.
It's not. MCP is a big unlock when using something like Cursor or Copilot. I think people who say this don't quite know what MCP is: it's just a thin wrapper around an API that describes its endpoints as tools. How is there not a ton of value in this?
MCP is the future in enterprise and teams.
It's as you said: people misunderstand MCP and what it delivers.
If you only use it as an API? Useless. If you use it on a small solo project? Useless.
But if you want to share skills across a fleet of repos? Deliver standard prompts to baseline developer output and productivity? Without having to sync them? And have it updated live? MCP prompts.
If you want to share canonical docs like standard guidance on security and performance? Always up to date and available in every project from the start? No need to sync and update? MCP resources.
If you want standard telemetry and observability of usage? MCP because now you can emit and capture OTEL from the server side.
If you want to wire execution into sandboxed environments? MCP.
MCP makes sense for org-level agent engineering but doesn't make sense for the solo vibe coder working on an isolated codebase locally with no need to sandbox execution.
People are using MCP for the wrong use cases and then declaring it excess, when the real use case is standardizing remote delivery of skills and resources. Tool execution is secondary.
And yet without MCP these CLI generators wouldn't be possible.
It's building on top of them, because MCP did address some issues (which arguably could've been solved better with CLIs to begin with, like adding proper help text to each command)... it just introduced new ones, too.
Some of which still won't be solved via switching back to CLI.
The obvious one being authentication and privileges.
By default, I want the LLM to be able to have full read only access. This is straightforward to solve with an MCP because the tools have specific names.
With a CLI it's not as straightforward, because the model will start piping etc., and the same CLI is often used for both read and write access.
All solvable issues. But while I suspect CLIs are going to get a lot more traction over the next few months, they're still not the thing we'll settle on, unless the privileges situation can be solved without making me greenlight commands every 2 seconds (or ignoring their tendency to occasionally go batshit insane and randomly wipe things out while running in yolo mode).
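To make the contrast concrete, here's a minimal sketch of the read-only policy that named MCP tools enable. The tool names are hypothetical examples, not from any real server:

```python
# Sketch: a read-only policy is easy to enforce when every operation
# is a named MCP tool. Tool names below are hypothetical.
READ_ONLY_TOOLS = {"list_issues", "get_issue", "search_code"}

def authorize(tool_name: str, allow_writes: bool = False) -> bool:
    """Allow a tool call only if it is on the read-only list,
    unless writes have been explicitly enabled."""
    return allow_writes or tool_name in READ_ONLY_TOOLS

print(authorize("get_issue"))     # read allowed
print(authorize("delete_issue"))  # write blocked by default
```

A bare CLI string offers no equivalent handle: the same binary both reads and writes, and pipes or subshells blur the line further, which is exactly the greenlight-every-command problem.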
Exactly. Once you start looking at MCP as a protocol to access remote OAuth-protected resources, not an API for building agents, you realize the immense value
> but we created MCP. Now we're wrapping MCP into CLIs...
Next we'll wrap the CLIs into MCPs.
MCP only exists because there's no easy way for AI to run commands on servers.
Oh wait there's ssh. I guess it's because there's no way to tell AI agents what the tool does, or when to invoke it... Except that AI pretty much knows the syntax of all of the standard tools, even sed, jq, etc...
Yeah, ssh should've been the norm, but someone is getting promoted for inventing MCP
No, it's more like: because AI can't know every endpoint and what it does, MCP allows for injecting the endpoints and a description into context so the AI can choose the right tool without additional steps.
Agents can't write bash correctly so... I wonder about your claim
They cannot? We have a client from 25 years ago and all the devops for them are massive bash scripts; 1000s of them. Not written by us (well, some parts as maintenance), and really the only 'thing' that almost always flawlessly fixes and updates them is claude code. Even with insane bash-in-bash-in-bash escaping and all kinds of not-well-known constructs. It works. So we have no incentive to refactor or rewrite. We tried 5 years ago and postponed, as we first had to rewrite their enormous and equally badly written ERP for their factory. Maybe that would not have happened either now...
This looks useful.
One pattern we've been seeing internally is that once teams standardize API interactions through a single interface (or agent layer), debugging becomes both easier and harder.
Easier because there's a central abstraction, harder because failures become more opaque.
In production incidents we often end up tracing through multiple abstraction layers before finding the real root cause.
Curious if you've built anything into the CLI to help with observability or tracing when something fails.
Tokens saved should not be your north star metric. You should be able to show that tool call performance is maintained while consuming fewer tokens. I have no idea whether that is the case here.
As an aside: this is a cool idea but the prose in the readme and the above post seem to be fully generated, so who knows whether it is actually true.
Token counts alone tell you nothing about correctness, latency, or developer ergonomics. Run a deterministic test suite that exercises representative MCP calls against both native MCP and mcp2cli while recording token usage, wall time, error rate, and output fidelity.
Measure fidelity with exact diffs and embedding similarity, and include streaming behavior, schema-change resilience, and rate-limit fallbacks in the cases you care about. Check the repo for a runnable benchmark, archived fixtures captured with vcrpy or WireMock, and a clear test harness that reproduces the claimed 96 to 99 percent savings.
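A sketch of the paired measurement described above: record tokens, wall time, and errors per call for both interfaces, then compare more than just token counts. The records here are illustrative placeholders, not real measurements:

```python
# Summarize per-call benchmark records for one interface.
# Each record: {"tokens": int, "seconds": float, "ok": bool}.
from statistics import mean

def summarize(records):
    return {
        "mean_tokens": mean(r["tokens"] for r in records),
        "mean_seconds": mean(r["seconds"] for r in records),
        "error_rate": sum(not r["ok"] for r in records) / len(records),
    }

# Placeholder fixtures for the two interfaces under comparison.
native = [{"tokens": 4200, "seconds": 1.9, "ok": True},
          {"tokens": 4100, "seconds": 2.1, "ok": True}]
cli =    [{"tokens": 160, "seconds": 2.0, "ok": True},
          {"tokens": 150, "seconds": 2.4, "ok": False}]

saving = 1 - summarize(cli)["mean_tokens"] / summarize(native)["mean_tokens"]
print(f"token saving: {saving:.0%}, cli error rate: "
      f"{summarize(cli)['error_rate']:.0%}")
```

The point of tracking error rate alongside savings: a 96% token reduction means little if half the calls fail or degrade.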
The AI prose is getting so tiring to read
"We measured this. Not estimates — actual token counts using the cl100k_base tokenizer against real schemas, verified by an automated test suite."
Nice project! I've been working on something very similar here https://github.com/max-hq/max
It works by schematising the upstream and making data locally synchronised + a common query language, so the longer term goals are more about avoiding API limits / escaping the confines of the MCP query feature set - i.e. token savings on reading data itself (in many cases, savings can be upwards of thousands of times fewer tokens)
Looking forward to trying this out!
Why is the concept of "MCP" needed at all? Wouldn't a single tool - web access - be enough? Then you can prompt:
And then the tools URL can simply return a list of URLs in plain text, which return the data in plain text. What additional benefits does MCP bring to the table?
A few things: in this case, you have to provide the tool list in your prompt for the AI to know it exists. But you probably want the AI agent to be able to act and choose tools without you micromanaging and reminding it in every prompt, so then you'd need a tool list... and then you're back to providing the tool list automatically ala MCP again.
MCP can provide validation & verification of the request before making the API call. Giving the model a /tool/forecast URL doesn't prevent the model from deciding to instead explore what other tools might be available on the remote server instead, like deciding to try running /tool/imagegenerator or /tool/globalthermonuclearwar. MCP can gatekeep what the AI does, check that parameters are valid, etc.
Also, MCP can be used to do local computation, work with local files etc, things that web access wouldn't give you. CLI will work for some of those use cases too, but there is a maximum command line length limit, so you might struggle to write more than 8kB to a file when using the command line, for example. It can be easier to get MCP to work with binary files as well.
I tend to think of local MCP servers like DLLs, except the function calls are over stdio and use tons of wasteful JSON instead of being a direct C-function call. But thinking of where you might use a DLL and where you might call out to a CLI can be a useful way of thinking about the difference.
The point is authorization. With full web access, your agent can reach anything and leak anything.
You could restrict where it can go with domain allowlists but that has insufficient granularity. The same URL can serve a legitimate request or exfiltrate data depending on what's in the headers or payload: see https://embracethered.com/blog/posts/2025/claude-abusing-net...
So you need to restrict not only where the agent can reach, but what operations it can perform, with the host controlling credentials and parameters. That brings us to an MCP-like solution.
But this is no different to using an API key with access controls and curl and you get the same thing.
MCP is just a worse version of the above, allowing lots of data exfiltration and manipulation by the LLM.
But MCP uses OAuth. That is not a "worse version" of API keys. It is better.
The classic "API key" flow requires you to go to the resource site, generate a key, copy it, then paste it where you want it to go.
OAuth automates this. It's like "give me an API key" on demand.
An MCP server lets you avoid giving the agent your API key so it can't leak it. At least in theory.
You could do the same with a CLI tool but it's more of a hassle to set up.
For me (actually trying to get shit done using this stuff) it's validation.
Being able to have a verifiable input/output structure is key. I suppose you can do that with a regular http api call (json) but where do you document the openapi/schema stuff? Oh yeah...something like mcp.
I agree that mcp isn't as refined as it should be, but when used properly it's better than having it burn thru tokens by scraping around web content.
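A minimal sketch of the validation that declared tool schemas enable: check a call's arguments against the declared shape before the API is ever touched. The schema here is a hypothetical example, stripped down from the JSON-Schema-style input schemas MCP tools declare:

```python
# Validate tool-call arguments against a declared shape before
# making the API call. Hypothetical schema for a "forecast" tool.
SCHEMA = {"city": str, "days": int}

def validate(args: dict, schema: dict = SCHEMA) -> list[str]:
    """Return a list of validation errors; empty list means the call is ok."""
    errors = [f"missing: {k}" for k in schema if k not in args]
    errors += [f"wrong type: {k}" for k, t in schema.items()
               if k in args and not isinstance(args[k], t)]
    return errors

print(validate({"city": "Oslo", "days": 3}))    # ok: []
print(validate({"city": "Oslo", "days": "3"}))  # type error caught
```

With raw web scraping there's no equivalent checkpoint: malformed calls just burn tokens on error responses and retries.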
One thing that I currently find useful on MCPs is granular access control.
Not all services provide good token definition or access control, and often have API Key + CLI combo which can be quite dangerous in some cases.
With an MCP even these bad interfaces can be fixed up on my side.
The prophecy of the hypermedia web
I feel like I haven’t read anything about this in combination with mcp and like I am taking crazy pills: does no one remember hateoas?
Proxying / gatekeeping
The token math is compelling, but I'm curious about the discovery step. With native MCP the host already knows what tools exist. With this, the agent has to run --list first, which means extra roundtrips. For 120 tools that might still be a net win, but the latency tradeoff seems worth calling out.
cool!
anthropic mentions MCPs eating up context and solutions here: https://www.anthropic.com/engineering/code-execution-with-mc...
I built one specifically for Cognition's DeepWiki (https://crates.io/crates/dw2md) -- but it's rather narrow. Something more general like this clearly has more utility.
There are a handful of these. I've been using this one: https://github.com/smart-mcp-proxy/mcpproxy-go
> Every MCP server injects its full tool schemas into context on every turn
I consider this a bug. I'm sure the chat clients will fix this soon enough.
Something like: on each turn, a subagent searches available MCP tools for anything relevant. Usually, nothing helpful will be found and the regular chat continues without any MCP context added.
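A minimal sketch of that per-turn search: score each tool's description against the user message and inject only plausible matches. A real implementation would use embeddings or a small model; keyword overlap here is just a stand-in, and the tool names are made up:

```python
# Per-turn tool routing: only inject tools whose descriptions
# overlap with the user's message. Hypothetical tool registry.
TOOLS = {
    "weather_forecast": "get the weather forecast for a city",
    "create_invoice": "create a billing invoice for a customer",
}

def relevant_tools(message: str, threshold: int = 2):
    """Return tool names whose description shares enough words
    with the message; an empty list means no MCP context is added."""
    words = set(message.lower().split())
    return [name for name, desc in TOOLS.items()
            if len(words & set(desc.split())) >= threshold]

print(relevant_tools("what is the weather forecast for Paris"))
print(relevant_tools("tell me a joke"))  # no match, regular chat continues
```

The usual case is the second one: nothing relevant found, so no schema tokens are spent at all.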
Absolutely.
I'll add to your comment that it isn't a bug of MCP itself. MCP doesn't specify what the LLM sees. It's a bug of the MCP client.
In my toy chatbot, I implement MCP as pseudo-python for the LLM, dropping typing info, and giving the tool infos as abruptly as possible, just a line - function_name(mandatory arg1 name, mandatory arg2 name): Description
(I don't recommend doing that, it's largely obsolete, my point is simply that you feed the LLM whatever you want, MCP doesn't mandate anything. tbh it doesn't even mandate that it feeds into a LLM, hence the MCP CLIs)
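A sketch of what that collapsing looks like, assuming the tools/list result shape (name, description, inputSchema with a required list); the tool definition itself is hypothetical:

```python
# Collapse an MCP tool definition into the one-line pseudo-python
# form described above: name(required args): description.
def compact(tool: dict) -> str:
    required = tool["inputSchema"].get("required", [])
    return f"{tool['name']}({', '.join(required)}): {tool['description']}"

tool = {  # hypothetical tool definition in the tools/list shape
    "name": "get_forecast",
    "description": "Weather forecast for a city",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"},
                       "days": {"type": "integer"}},
        "required": ["city"],
    },
}
print(compact(tool))  # get_forecast(city): Weather forecast for a city
```

The protocol only defines what the server sends; how much of it the client spends context on is entirely the client's call.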
That’s a trade off, now you need multiple model calls for every single request
Yup, routing is key. Just like how we've had RAG so we don't have to add every biz doc to the context.
I agree with the general idea that models are better trained on popular CLI tools like directory navigation etc., but outside of ls and ps etc. the difference isn't really there; new CLIs are just as confusing to the model as new MCPs.
Yes, we just need RAG applied to tools. Very simple to implement.
I don’t think so. Without a list of tools in context the ai can’t even know what options it has, so a RAG like search doesn’t feel like it would be anywhere near as accurate
How is this the 5th one of these I have seen this week? Is everyone just trying to make the same thing?
Basically yes.
Time is a flat circle...
I may be showing my ignorance here, but wouldn't the ideal situation be for the service to use the same number of tokens no matter what client sent the query?
If the service is using more tokens to produce the same output from the same query, but over a different protocol, then the service is a scam.
When you're using an agent, the "query" isn't just each bit of text you enter into the agent prompt. It's the whole conversation.
But I do wonder whether these tools have been tested to show that the quality of subsequent responses is the same.
That doesn't explain why the protocol matters. Surely for equivalent responses, you need to send equivalent payloads. You shouldn't be able to hack this from the client side.
How does this differ from mcporter? https://github.com/steipete/mcporter/
Someone had to do it. mcp in bash would make them composable, which I think is the strongest benefit for high capability agents like Claude, Cursor and the like, who can write Bash better than I. Haven't gotten into MCP since early release because of the issues you named. Nice work!
How would the LLM exactly discover such unknown CLI commands?
I've got a qdrant based approach that I'm working on that solves that here: https://github.com/day50-dev/infinite-mcp
Essentially I've cloned thousands of MCP servers, used the readmes and the star rating to respond to the qdrant query (star ratings as a boost score have been an attack vector, yes I know, it's an incomplete product [1]), then present it as a JSON response with "one-shots", which this author calls CLIs.
I think I became discouraged from working on it and moved on because my results weren't that great but search is hard and I shouldn't give up.
I'll get back on it seeing how good this tool is getting traction.
[1] There needs to be a legitimacy post-filter so that github user micr0s0ft or what-have-you doesn't go to the top - I'm sure there are some best-practice ways of doing this and I shouldn't invent my own (which would involve seeing if the repo appears on non-UGC sites, I guess?!) but I haven't looked into it
Skills or tell it the --list command would be my guess.
Cool to see this!
I started a similar project in January but nobody seemed interested in it at the time.
Looks like I'll get back on that.
https://github.com/day50-dev/infinite-mcp
Essentially
(1) start with the aggregator mcp repos: https://github.com/day50-dev/infinite-mcp/blob/main/gh-scrap... . pull all of them down.
(2) get the meta information to understand how fresh, maintained, and popular the projects are (https://github.com/day50-dev/infinite-mcp/blob/main/gh-get-m...)
(3) try to extract one-shot ways of loading it (npx/uvx etc) https://github.com/day50-dev/infinite-mcp/blob/main/gh-one-l...
(4) insert it into what I thought was qdrant but apparently I was still using chroma - I'll change that soon
(5) use a search endpoint and an mcp to search that https://github.com/day50-dev/infinite-mcp/blob/main/infinite...
The intention is to get this working better and then provide it as a free api and also post the entire qdrant database (or whatever is eventually used) for off-line use.
This will pair with something called a "credential file", which will be a [key, repo] pair. There's an attack vector if you don't pair them up. (You could have an MCP server for some niche thing, get on the aggregators, get fake stars, change the code to a fraudulent version of a popular MCP server, harvest real API keys from sloppy tooling, and MitM.)
Anyway, we're talking about 1000s of documents at the most, maybe 10,000. So it's entirely giveable away for free.
If you like this project, please tell me. Your encouragement means a lot to me!
I don't want to spend my time on things that nobody seems to be interested in.
How is it different from 'mcporter', already included in eg. openclaw?
For a typical B2B SaaS use case (non-technical employees), MCP is working great since it allows people to work in chat interfaces (ChatGPT, Claude). They will not move to terminal UXs anytime soon.
So I don't see why a typical productivity app would build a CLI rather than an MCP. Am I missing anything?
I kind of feel like it might be better to go from CLI to MCP.
MCP just needs to add dynamic tool discovery and lazy-load the tools; that would solve this token problem, right?
Doubtful that a 16-token summary is the same as the JSON tool description that uses 10x more tokens. The JSON will describe parameters in a longer way, and that probably has some positive impact on accuracy.
MCP itself is a flawed standard to begin with, as I said before [0], and it wraps around an API from the start.
You might as well directly create a CLI tool that works with the AI agents which does an API call to the service anyway.
[0] https://news.ycombinator.com/item?id=44479406
This post and the project README are obviously generated slop, which personally makes me completely skip the project altogether, even if it works.
If you want humans to spend time reading your prose, then spend time actually writing it.