Hacker News | pmig's comments

I am still waiting for the day when gMessage (currently called "Messages", "Hangouts", or who knows what) & iMessage will be interoperable. The RCS standard is already out there.


Thank god we built all our infra on top of EKS, so everything works smoothly =)


Indeed, they are quite similar, but ours is also open source: https://github.com/hyprmcp/mcp-install-instructions-generato...


After taking a closer look at MCPCat, the implementation is quite different: with MCPCat you need to integrate an SDK, while HyprMCP works as a proxy in front of your existing MCP server.
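
Roughly speaking, instead of compiling an SDK into the server, you point the client at the proxy and it forwards to your unchanged server. A minimal sketch (the config key and proxy URL are placeholders and vary by client):

    {
      "mcpServers": {
        "my-server": {
          "url": "https://my-server.hyprmcp.example/mcp"
        }
      }
    }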


The token spend only increases due to the additional parameter names and descriptions, right?


Not just that - for every tool call, the user's agent has to output some extra tokens to put the context info in the additional argument.
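
To illustrate, here is roughly what a call looks like on the wire (the tool name and the extra "context" argument are made up; only the tools/call shape comes from the MCP spec):

    {
      "jsonrpc": "2.0",
      "id": 7,
      "method": "tools/call",
      "params": {
        "name": "search_docs",
        "arguments": {
          "query": "rate limits",
          "context": "user is debugging 429 errors in production"
        }
      }
    }

Every token in that extra "context" string has to be generated by the agent on each call, on top of the one-time schema overhead in the tool listing.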


Congrats on the launch! We built something similar [1], but opted to make the name optional as well, to reduce friction even more. We also built an MCP server that serves these installation instructions :-)

[1]: https://github.com/hyprmcp/mcp-install-instructions-generato...


This is so cool


There is also a first-party registry [1] in development, which will hopefully become the next Artifact Hub for MCP servers.

[1]: https://github.com/modelcontextprotocol/registry


Interesting, have you thought about converting the MCP server to a remote HTTP server?


That's a good point. We really think the future of MCP servers is remote servers, as running "random" software with little to no boundaries and no verification shouldn't be a thing. Is there a specific reason you prefer stdio servers over HTTP servers? Which servers are you using?

Thanks for the mcpb hint, we will look into it.


> Is there a specific reason you prefer stdio servers over HTTP servers?

Yes: the main reason is that I control which applications are configured with the command/args/environment to run the MCP server, instead of exposing a service on my localhost that any process on my computer can connect to (or worse, on my network if it listens on all interfaces).
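
For example, with a config along the lines of Claude Desktop's mcpServers block (the server name, path, and env var here are placeholders), the server only runs when a client I've explicitly configured spawns it:

    {
      "mcpServers": {
        "replicate": {
          "command": "node",
          "args": ["/path/to/replicate-mcp/index.js"],
          "env": { "REPLICATE_API_TOKEN": "…" }
        }
      }
    }

Nothing listens on a port, so there is no endpoint for other local processes (or the network) to hit.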

I mostly run MCP servers that I've written, but otherwise most of the third-party ones I use are related to software development and AI providers (e.g. context7, Replicate, ElevenLabs…). The last two cost me money when their tools are invoked, so I'm not about to expose them on a port, given that auth doesn't happen at the protocol level.


> as running "random" software with little to no boundaries and no verification shouldn't be a thing

Would you class all locally running software this way, and all remotely running software the inverse?


Most software we install locally is at least distributed via a trusted party (App Store, Play Store, Linux package repos, etc.) and has a valid signature (desktop & mobile), or is contained in some way (containers, browser extensions, etc.).

In the case of MCP, remote servers at least protect you from local file leakages.


This is actually a huge milestone and could become something like Artifact Hub [0] in the CNCF space. Artifact Hub is a great place to publish and look up Helm charts; the verified and official badges especially help a lot.

[0]: https://artifacthub.io/

