- https://github.com/apify/mcpc
- https://github.com/chrishayuk/mcp-cli
- https://github.com/wong2/mcp-cli
- https://github.com/f/mcptools
- https://github.com/adhikasp/mcp-client-cli
- https://github.com/thellimist/clihub
- https://github.com/EstebanForge/mcp-cli-ent
- https://github.com/knowsuchagency/mcp2cli
- https://github.com/philschmid/mcp-cli
- https://github.com/steipete/mcporter
- https://github.com/mattzcarey/cloudflare-mcp
- https://github.com/assimelha/cmcp

Tell me the hottest day in Paris in the coming 7 days. You can find useful tools at www.weatherforadventurers.com/tools
And then the tools URL can simply return a list of URLs in plain text, like /tool/forecast?city=berlin&day=2026-03-09 (returns the highest temp and rain probability for the given day in the given city), with each endpoint returning its data in plain text. What additional benefits does MCP bring to the table?
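A toy version of the consuming side, assuming the listing format sketched above (an endpoint followed by a parenthesized description; both the format and the endpoints are made up for illustration):

```python
# Hypothetical sketch: parse a plain-text tool listing of the kind described
# above. The "<endpoint> (<description>)" line format is an assumption.

def parse_tool_listing(text: str) -> list[dict]:
    """Split each line into an endpoint path and its description."""
    tools = []
    for line in text.strip().splitlines():
        line = line.strip()
        if not line:
            continue
        if "(" in line:
            endpoint, _, rest = line.partition("(")
            desc = rest.rstrip(")").strip()
        else:
            endpoint, desc = line, ""
        tools.append({"endpoint": endpoint.strip(), "description": desc})
    return tools

listing = """\
/tool/forecast?city=berlin&day=2026-03-09 (Returns highest temp and rain probability)
/tool/current?city=berlin (Returns current conditions)
"""
for tool in parse_tool_listing(listing):
    print(tool["endpoint"], "->", tool["description"])
```

The agent only needs to read plain text and build URLs, which is the whole appeal of the idea.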
Isn’t this somewhat misleading? Any system context is going to be added “per turn” because it’s included in the first turn. Is any context removed on a turn-by-turn basis (aside from thinking)?
As an aside: this is a cool idea, but the prose in the readme and the above post seems to be fully generated, so who knows whether it is actually true.
It works by schematising the upstream and keeping the data locally synchronised, plus a common query language. The longer-term goals are more about avoiding API limits and escaping the confines of the MCP query feature set - i.e. token savings on reading the data itself (in many cases, thousands of times fewer tokens).
Looking forward to trying this out!
I consider this a bug. I'm sure the chat clients will fix this soon enough.
Something like: on each turn, a subagent searches available MCP tools for anything relevant. Usually, nothing helpful will be found and the regular chat continues without any MCP context added.
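A minimal sketch of that per-turn filter. Real systems would use embeddings or an LLM subagent to judge relevance; naive keyword overlap stands in here, and the tool catalog is made up:

```python
# Naive per-turn tool relevance filter: only surface MCP tools whose
# descriptions overlap enough with the user's turn. Keyword overlap is a
# stand-in for whatever the subagent would actually do.

TOOLS = [
    {"name": "weather_forecast",
     "description": "get temperature and rain forecast for a city"},
    {"name": "repo_search",
     "description": "search github repositories by topic"},
]

def relevant_tools(user_turn: str, threshold: int = 2) -> list[str]:
    """Return tool names whose descriptions share enough words with the turn."""
    turn_words = set(user_turn.lower().split())
    hits = []
    for tool in TOOLS:
        overlap = turn_words & set(tool["description"].lower().split())
        if len(overlap) >= threshold:
            hits.append(tool["name"])
    return hits

# Most turns match nothing, so no MCP context is added:
print(relevant_tools("how do I reverse a linked list?"))         # []
print(relevant_tools("what is the rain forecast for my city?"))  # ['weather_forecast']
```

The important property is the common case: an empty result means zero tool schemas enter the context for that turn.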
Anthropic mentions MCPs eating up context, and some solutions, here: https://www.anthropic.com/engineering/code-execution-with-mc...
I built one specifically for Cognition's DeepWiki (https://crates.io/crates/dw2md) -- but it's rather narrow. Something more general like this clearly has more utility.
The analogy I'd draw is database query planning: you don't load the entire schema into memory before every query, you resolve references on demand. Same principle here. Does the CLI maintain a tool cache between invocations, or does it re-fetch schemas each time?
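The on-demand resolution that analogy describes could look something like this (a sketch, not the CLI's actual implementation; fetch_schema is a hypothetical stand-in for a network call):

```python
# Sketch of lazy, cached schema resolution: fetch a tool's schema only on
# first reference, then serve it from cache. fetch_schema is hypothetical.
from functools import lru_cache

FETCHES = []  # track how many real fetches happen

def fetch_schema(tool_name: str) -> dict:
    """Stand-in for an actual network fetch of the tool's schema."""
    FETCHES.append(tool_name)
    return {"name": tool_name, "params": {}}

@lru_cache(maxsize=None)
def resolve(tool_name: str) -> dict:
    """Resolve a schema on first reference; subsequent calls hit the cache."""
    return fetch_schema(tool_name)

resolve("forecast")
resolve("forecast")  # cache hit: no second fetch
print(len(FETCHES))  # 1
```

Whether the cache should also persist across invocations (on disk, with some freshness check) is exactly the question above.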
One pattern we've been seeing internally is that once teams standardize API interactions through a single interface (or agent layer), debugging becomes both easier and harder.
Easier because there's a central abstraction, harder because failures become more opaque.
In production incidents we often end up tracing through multiple abstraction layers before finding the real root cause.
Curious if you've built anything into the CLI to help with observability or tracing when something fails.
If the service is using more tokens to produce the same output from the same query, just over a different protocol, then the service is a scam.
So I don't see why a typical productivity app would build a CLI rather than an MCP. Am I missing anything?
I started a similar project in January, but nobody seemed interested in it at the time.
Looks like I'll get back on that.
https://github.com/day50-dev/infinite-mcp
Essentially:
(1) Start with the aggregator MCP repos: https://github.com/day50-dev/infinite-mcp/blob/main/gh-scrap... and pull all of them down.
(2) Get the meta information to understand how fresh, maintained, and popular the projects are (https://github.com/day50-dev/infinite-mcp/blob/main/gh-get-m...).
(3) Try to extract one-shot ways of loading each one (npx/uvx etc.): https://github.com/day50-dev/infinite-mcp/blob/main/gh-one-l...
(4) Insert it into what I thought was Qdrant, but apparently I was still using Chroma - I'll change that soon.
(5) Use a search endpoint and an MCP to search that: https://github.com/day50-dev/infinite-mcp/blob/main/infinite...
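A toy version of steps (4) and (5), with a plain in-memory keyword index standing in for Chroma/Qdrant (the record shape and example repos are assumptions, not the project's real schema):

```python
# Toy sketch of indexing MCP server metadata and searching it. A real
# version would embed descriptions into Chroma or Qdrant; a naive keyword
# index stands in here so the flow is easy to see.

SERVERS = [
    {"repo": "example/weather-mcp", "stars": 412, "one_shot": "npx weather-mcp",
     "description": "forecast and current weather tools"},
    {"repo": "example/git-mcp", "stars": 98, "one_shot": "uvx git-mcp",
     "description": "query git history and blame"},
]

def search(query: str) -> list[dict]:
    """Rank servers by keyword overlap with the query, then by stars."""
    words = set(query.lower().split())
    scored = []
    for s in SERVERS:
        score = len(words & set(s["description"].lower().split()))
        if score:
            scored.append((score, s["stars"], s))
    scored.sort(key=lambda t: (t[0], t[1]), reverse=True)
    return [s for _, _, s in scored]

for hit in search("weather forecast"):
    print(hit["repo"], "->", hit["one_shot"])
```

Storing the one-shot launch command alongside the match means the agent can go straight from search hit to running server.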
The intention is to get this working better and then provide it as a free api and also post the entire qdrant database (or whatever is eventually used) for off-line use.
This will pair with something called a "credential file", which will be a list of [key, repo] pairs. There's an attack vector if you don't pair them up: someone could publish an MCP server for some niche thing, get on the aggregators, get fake stars, change the code to a fraudulent clone of a popular MCP server, and then harvest real API keys from sloppy tooling and MitM the traffic.
Anyway, we're talking about thousands of documents at most, maybe 10,000. So the whole thing can be given away for free.
If you like this project, please tell me. Your encouragement means a lot to me!
I don't want to spend my time on things that nobody seems to be interested in.
You might as well directly create a CLI tool that works with the AI agents and makes an API call to the service anyway.
If you want humans to spend time reading your prose, then spend time actually writing it.